Logic Circuit Design: Selected Topics and Methods (ISBN 3031406729, 9783031406720)

The 2nd edition has been thoroughly revised and is intended as a wake-up call in the stagnant and dormant field of switching circuit theory.


English Pages 252 [237] Year 2023


Table of contents :
Preface
Contents
Part I Combinational Circuits and Switching Algebra
Introduction to Part I
1 Logic Variables & Events
1.1 Specifying a Circuit in Plain Prose
1.2 Analogue and Binary Timing Diagrams
1.3 Events Graph and Events Table
1.4 Logic Variables and Logic Formulas
1.5 Drawing the Logic Circuit
2 Switching Devices
2.1 Pneumatic Valves
2.2 Electric Relays
Inverting a Relay Circuit
Realising Feedback
2.3 CMOS Transistors
3 Elementary Logic Functions
3.1 Logic Functions
3.2 Basic Gates
3.3 Using AND, OR and NOT
3.4 Basic Laws
3.5 Single-Variable Formulas
3.6 Commutative and Associative Laws
3.7 Distributive Laws
3.8 Generalised DeMorgan Theorems
3.9 Tautologies and Truth Tables
4 Normal Forms and Circuit Design
4.1 Minterms and Maxterms
4.2 Canonical Normal Forms
4.3 Using Canonical Normal Forms
4.4 Zhegalkin Normal Form
4.5 Dual Zhegalkin Normal Form
5 Veitch-Karnaugh Maps
5.1 How to Draw a Veitch-Karnaugh Map
5.2 Karnaugh Set and Generalised Minterms
5.3 Proving and Developing Theorems
5.4 Evaluating Veitch-Karnaugh maps
5.5 Karnaugh Trees & Map-Entered Variables
6 Adjacency and Consensus
6.1 Adjacent K-Sets and their Consensus
6.2 Formalising Adjacency
6.3 Formalising Consensus
6.4 When is one K-set a Subset of another?
7 Algebraic Minimisation
7.1 Finding the Full Cover
7.2 Finding Minimal Covers
7.3 Minimisation considering Don't Cares
8 Design by Composition
8.1 The Basic Concept
8.2 Catenation
8.3 Visualising the Composition Problem
8.4 Choosing a Generic Function
8.5 Composing an Example Circuit
Reviewing Part I
Part II Theory of Latches
Introduction to Part II
9 Basic Theory of Latches
9.1 What is a Latch?
9.2 The Memory Function
Specifying a Memory Function in a VK-map
Reading the Reduced VK-map of a Memory Function
9.3 Introducing Inclusions and Exclusions
9.4 Basic Memory Evaluation-Formulas
9.5 Generalised Memory Evaluation-Formulas
10 Designing Feedback Latches
10.1 Feedback Evaluation-Formulas
10.2 Memorisation Hazards
Memorisation Hazards in Disjunctive Latches
Memorisation Hazards in Conjunctive Latches
10.3 Delayed Feedback
10.4 Fail-Safe Latch Design
10.5 Fail-Safe Latches with Mutually Inverted Outputs
11 Designing Latches by Composition
11.1 Theory of Latch Composition
11.2 Example Illustrating Latch Composition
11.3 Eccles-Jordan Latch
11.4 Using Eccles-Jordan as a Hinge Latch
Reviewing Part II
Part III Asynchronous Circuits
Introduction to Part III
12 Specifying Asynchronous Circuits
12.1 Developing an Events Graph
12.2 What is an Events Tree?
12.3 What is a Flow Table?
13 Employing Events Trees
13.1 From Events Graph to Events Tree
13.2 Calculating and Drawing the Circuit
13.3 Direct Specification of Feedback
13.4 Verification versus Reverse Engineering
14 Flow Table Development and Usage
14.1 Direct Specification of a Flow Table
14.2 Events Graph to Flow Table
Vingron — Deterministic Sub-Sequences of Input Events
Sandige — Transmitting all Columns of the Events Graph
14.3 Circuit Design by Iterative Catenation
14.4 Compound Self-Reference Table
14.5 When Are Two Flow Tables Equivalent?
Reviewing Part III
Part IV OCTOPUS — A Programming Language for PLCs
Introduction to Part IV
15 Historical Remarks and Programming Overview
15.1 IEC 61131-3 Programming Languages
15.2 The Precursor to OCTOPUS
15.3 Basic Modules of an OCTOPUS Circuit
15.4 Circuit Development using Templates
15.5 Distributed Intelligence of OCTOPUS
16 Designing and Testing on a Computer
16.1 Development Platform logi.CAD
16.2 Modelling Actuators
17 Sequential Function Chart and Sequential Chain
17.1 Sequential Function Chart
17.2 OCTOPUS'S Sequential Chain
17.3 Multi-Path Programs
17.4 Choosing Alternative Paths
17.5 Repeating Sub-Cycles
18 Operating a Transition Controlled Process
18.1 Input Commands and State Visualisation
18.2 Automatic Mode of Operation
18.3 Interrupt, Continue, Next Step
18.4 Manual Mode and False Transition Signal
18.5 Emergency Stop
18.6 Reset
19 The SCV-Module
19.1 SCV-Circuit and Display Panel Layout
19.2 Input Control Commands and Status Visualisation
20 Actuator Control Units
20.1 ACUs for Identity Actuators
20.2 ACUs for Memory Actuators
20.3 On the Usage of the FS Output Signals
Reviewing Part IV
Correction to: Logic Circuit Design
Correction to: Chapters 1 and 2 in: S. P. Vingron, Logic Circuit Design, https://doi.org/10.1007/978-3-031-40673-7
Bibliography
Selected Books
Selected Papers
Backmatter
Index

Shimon P. Vingron

Logic Circuit Design
Selected Topics and Methods
Second Edition

Shimon P. Vingron
Hinterbrühl, Austria

ISBN 978-3-031-40672-0
ISBN 978-3-031-40673-7 (eBook)
https://doi.org/10.1007/978-3-031-40673-7

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2012, 2024, corrected publication 2024 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland Paper in this product is recyclable.

To the memory of my wife, Dora, the spirit of the family, the companion and love of my life.

Preface

The brief version of an introduction to this book is: All general information I wish to impart on the subject matter and on the book itself is put into the introductions and reviews of each of the book's four parts — summaries I encourage you to look up. Comments are always welcome at [email protected].

On the other hand, you might prefer the following more conventional (albeit less informative) introduction: This book contains selected topics and methods on the design of logic circuits. The topics are organised into four divisions or parts.

Part I is a compact presentation of switching algebra and its application to the design or synthesis of combinational circuits, circuits that have no memorising ability. There is one method you won't find in other publications, and that is design by composition, which is based on the catenation of functions. You will encounter this method again in Part II, and yet again, in the form of iterative catenation, in Part III. Thus, catenation becomes a central method in switching algebra.

Part II is a short but in-depth analysis of the theory of binary memory, which is the backbone of the design of feedback latches. Notably, the theory of feedback latches states (unequivocally) that latches are not allowed to have delays in their feedback. This elimination of time dependence in the theory of feedback latches is what allows us to apply switching algebra to their design, because switching algebra (being derived from formal or mathematical logic) is non-temporal. Feedback latches are prone to failures in memorisation. But these so-called memorisation hazards can be completely avoided by switching-algebraic means (sic!), making these latches fail-safe.

Part III discusses the design of asynchronous sequential circuits. These are circuits that have memorising abilities to various degrees, and only switch when the present value of an input variable changes. Again the prime problem is developing a non-temporal theory so as to be able to apply switching algebra. This theory is based on a tree representation (the events tree) of a switching problem. On the one hand, the events tree can be used to directly calculate the sequential circuit (by employing the theory of latches developed in Part II), while on the other hand an algorithmic process called iterative catenation leads to a pre-encoded state transition table with a minimum number of internal states, thus avoiding the main problems of the conventional design technique.

Part IV addresses a shortcoming of switching algebra when applying it to real-world industrial problems. A completely different design philosophy must take hold to cope with the large amount of data that needs to be managed. Here we consider industrial transition-controlled processes governed by programmable logic controllers (PLCs), and develop for these PLCs universally applicable logic modules that can be put together in a LEGO-like fashion (with no, or only a minimal, knowledge of switching algebra) to govern any given transition-controlled process. In this final part we delve quite deeply into the control of industrial processes.

Shimon P. Vingron


Part I

Combinational Circuits and Switching Algebra

Introduction to Part I

Combinational circuits are the simplest binary switching-circuits as they have no memorising ability. Because they occur in their own right, and are also used in all circuits which do have a memorising ability, they are basic to all applications and thus fundamentally important. Nevertheless, it took roughly one hundred years from their advent in 1835, when the Scottish-American scientist J. Henry invented the electro-magnetic relay, to Shannon's [1938] influential paper 'A Symbolic Analysis of Relays and Switching Circuits', the paper that founded switching algebra. From this late start onward, switching theory has remained technology driven. Even before Veitch [1952] and Karnaugh [1953] had put forth the so-called Veitch-Karnaugh map for simplifying combinational circuits, the transistor had been invented (in 1948), and we were already working with the third technological computer generation — the first using electro-magnetic relays, the second electronic radio valves, and the third transistors. At this point in the development we still had no idea how to calculate simple memory circuits, let alone general circuits with memorising ability, so-called sequential circuits. All such circuits were developed intuitively.

The topics chosen for presentation in this part are a balanced extract of problems commonly encountered when working with combinational circuits. It is easy to list many further areas that are of basic interest, but being reticent herein allows us to focus our sights. The material of this part is based on Vingron [2004], but much of it has been reworked quite fundamentally, especially Chapters 5, 6 and 8.

Chapter 1

Logic Variables & Events

Switching circuits process binary input signals logically, the result itself being a binary output signal. The logical operations used in processing the input signals require the binary signals to be transformed into logic variables. Giving substance to the previous two sentences is the subject of this chapter. The first three sections concentrate on the specification of circuits, with Section 1.3 introducing two important tools: the events graph and the events table. The prime result of this chapter is the introduction of logic variables and logic formulas in Section 1.4.

1.1 Specifying a Circuit in Plain Prose

Let us employ a running example of a switching circuit to introduce some basic concepts. As formal methods for describing a circuit's behaviour are, as yet, lacking, we shall do the obvious, and use plain prose to specify a running example:

Imagine a windowless room with two doors and a light switch at each door. Every time someone passes through a door (to become the sole occupant of the room, or to leave it empty) he brings the switch next to that door from its current to its alternative position, thereby switching the lights on if they were off, and off if they were on. Correspondingly, the room's lights go on if they were off, and vice versa, go off if they were on. Thus, the basic idea is: whenever someone enters the room he turns the electric lights on, and then turns them off when he leaves the room by the same, or by the other door.


Our task is to design a circuit that takes as its two input signals the current (or voltage) from the two switches, and produces as output signal the lighting condition in the room.

There are a few things to note in the above specification. Firstly, care was taken to state the number of inputs and outputs to the circuit. Secondly, each of the circuit's signals occurs in either of two distinctly different states: The current, voltage, or lights are either on or off. Signals with this property are called binary signals. Usually the two alternative states of a binary signal are referred to by the integers 0 and 1. The 0 is most frequently associated with the lower energy level of a binary signal, the 1 with the higher energy level. (If the energy level of a signal's state is irrelevant, one is free to associate 0 and 1 with the signal's states in any way convenient.)

Less obvious aspects of the above specification are: The wording of the specification is redundant. For human beings redundancy is helpful in grasping things more quickly, and often more thoroughly. Formal systems (on the other hand) need no redundancy (whatsoever). Finally, the above specification is complete, meaning that it contains all the information needed to design the circuit; you will soon see how this statement can be verified.
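As a quick plausibility check, the specified behaviour can be simulated in a few lines. A minimal sketch (Python; names are illustrative), assuming both switches start in position 0 with the lights off, so that the lights are on exactly when the two switch values differ:

    # Sketch of the two-switch room: lights are on exactly when the
    # switch positions differ (both switches assumed to start at 0).
    def lights(x1: int, x2: int) -> int:
        return x1 ^ x2  # XOR of the binary switch values

    # Someone enters via door 1, crosses the room, leaves via door 2,
    # then enters via door 2 and leaves via door 1:
    for x1, x2 in [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]:
        print(f"x1={x1} x2={x2} -> lights={lights(x1, x2)}")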

1.2 Analogue and Binary Timing Diagrams

Casting the usually cumbersome verbal specification into a technical form is frequently done via a timing diagram, a diagram which depicts the time-dependent behaviour of a signal. A timing diagram for our running example is shown in Figure 1.1. This being our first acquaintance with a timing diagram, we ponder it and its relationship to the running example quite conscientiously.

Fig. 1.1 A timing diagram of the running example

t0: At an arbitrarily chosen starting time, t0, when you open a door to enter the room, assume that the contacts of both switches are open, letting no current flow. Accordingly, we interpret both input signals to the circuit to be 0. The lights will be off, and we again choose the integer 0 to describe this state.

t1: Entering the room, you activate, say, switch 1. The current that can now flow over the closed contacts of the switch is referred to (when flowing freely) by the integer 1. But current cannot jump from a zero value to its maximum value: it first rises slowly, then ever faster, until it almost reaches its maximum value, from when on it grows ever more slowly towards its maximum of 1. Only at time t1, when the current has reached a certain height (say, 80% of the maximum value), will the circuit register that the input signal has become 1, and will start turning the lights on. As with the current, it takes time for the lights to obtain their full brightness, as the form of the post-t1 output signal depicts.

t2: You walk through the room, leaving it via the opposite door and bringing switch 2 into its alternative state (a value of 1 for switch 2) to turn the lights off as you leave (letting the value for the lights drop to zero). Again, note the gradual rise of the value for x2, and the gradual decline of the output value y.

t3: To turn the lights on, a person entering the room tips the switch at hand (say switch 1). Please take a few moments to consider the relationship between the falling signal (x1) and the rising signal (y).

t4: When leaving the room by the opposite door, switch 2 will be brought to its alternative state to turn the lights off.

We now take a closer look at the concept of a signal, and the way analogue signals are transformed into binary signals. A signal is the information carried by, or contained in, a quantifiable physical entity (such as voltage, speed, force, etc.). Colloquially, the physical entity itself (and not the information it carries) is frequently called the signal. To be independent of any physical entity and its units of measurement (e.g., Volt, m/s, Newton) we let any physical entity being considered vary continuously between 0 and 1. Signals that can change their values continuously are called analogue signals. An example of an analogue signal is given in Figure 1.2a. For all practical purposes, physical entities in a real-world environment change their values continuously. But switching circuits use binary signals, signals whose values are either 0 or 1, and thus are supposed to jump instantaneously from one value to another. Clearly, it is quite a challenge to transform real-world analogue signals into binary signals. This is done with the help of an ideal hysteresis, a function with an input for the analogue signal and an output for the binary signal. The input-output behaviour of the hysteresis is shown in Figure 1.2c and explained in the text of the figure. Electronically, the ideal hysteresis is well approximated by a circuit known as a Schmitt trigger.

Fig. 1.2 Transforming an analogue signal (a) into a binary signal (b) with the help of an ideal hysteresis (c)

Let us apply the hysteresis function to the analogue signal of Figure 1.2a. The instant the analogue signal (which is fed into the hysteresis) is switched on, it starts rising, first slowly and then with ever increasing speed. Only when it reaches a certain high level H does the output of the hysteresis jump from 0 to 1. Note the time lag HΔt from the instant the analogue signal is switched on to the moment when the binary output signal becomes 1. The analogue signal continues to rise until it reaches its 1-value. When it is switched off, it starts declining. Only when it has dropped to the low level L does the output signal of the hysteresis jump from 1 to 0. Here too, we register a time delay, LΔt, from when the analogue signal is switched off to the moment the binary signal drops to 0.

We are now well equipped to transform our analogue timing diagram, repeated in Figure 1.3a, into its binary equivalent of Figure 1.3b. As the instant when an input signal jumps from 0 to 1, we choose the moment when the analogue signal reaches the H-level. As the instant when an input signal jumps from 1 to 0, we choose the moment when the analogue signal drops to the L-level. The instantaneous zero-to-one transitions, and the one-to-zero transitions, of the output signal are governed by the concept of an ideal hysteresis.

Fig. 1.3 Developing the binary timing diagram (b) from an analogue timing diagram (a)

The binary timing diagram of Figure 1.3b implies a causal dependence of the output signal y on the input signals x1 and x2 that is quite misleading. The implication is that the rising edge of x1 at t1 causes the output y to jump (instantaneously) from 0 to 1 after the time interval HΔt has elapsed. On the other hand, the rising edge of y at t3 + HΔt seems to be caused by the falling edge of x1 at t3. Do note that, in a like manner, both the rising and falling edges of x2 seem to cause the falling edges of y. If we wanted to design circuits along these lines, we would quickly find ourselves in rather deep water. To avoid this, we introduce an idealisation that allows a simpler conceptual and mathematical approach.
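The ideal hysteresis is easily mimicked in code. A minimal sketch, assuming normalised threshold levels H = 0.8 and L = 0.2 (illustrative values; the text leaves them open):

    # Ideal hysteresis: the binary output jumps to 1 only when the
    # analogue input reaches the H-level, and back to 0 only at the L-level.
    def make_hysteresis(high: float = 0.8, low: float = 0.2):
        state = 0  # current binary output
        def step(analogue: float) -> int:
            nonlocal state
            if state == 0 and analogue >= high:
                state = 1
            elif state == 1 and analogue <= low:
                state = 0
            return state
        return step

    schmitt = make_hysteresis()
    samples = [0.0, 0.3, 0.7, 0.85, 1.0, 0.6, 0.25, 0.1]
    print([schmitt(a) for a in samples])  # -> [0, 0, 0, 1, 1, 1, 1, 0]

Note how the rising signal is ignored until it reaches the H-level, and the falling signal until it drops to the L-level: these are the time lags HΔt and LΔt discussed above.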

1.3 Events Graph and Events Table

Mathematically, the problem with the binary timing-diagram of Figure 1.3b is that the output variable y is not a function of the ordered pair (x1, x2) of input variables x1 and x2: Every occurring input pair — (0, 0), (0, 1), (1, 0), and (1, 1) — maps to 0 as well as to 1. Let it suffice to point this out for only one input pair, for (x1, x2) = (1, 1), which is present from t2 to t3. From t2 to t2 + LΔt the output is 1, while from t2 + LΔt to t3 the output is 0. As can be seen from the diagram of Figure 1.3b, the problem with two output values per input pair stems from the presence of the time lags LΔt and HΔt. To remedy this situation we postulate circuits to be well-behaved, i.e., a circuit's output may only change when the value of at least one input variable changes. As desired, postulating that a circuit be well-behaved reduces the time delays in Figure 1.3 to zero.

We call the binary timing-diagram of a well-behaved circuit an events graph. The events graph for our running example is shown in Figure 1.4. Superficially, the events graph seems to differ insignificantly from the binary timing diagram. But the basic and very important difference is that the events graph is not a time-dependent representation of signals.

Fig. 1.4 An events graph for the running example
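Whether a given specification maps every input event to a single output value can be checked mechanically. A minimal sketch, assuming the events graph is given as a list of (input event, output value) pairs, one per location:

    # Combinational iff every input event always maps to the same output.
    def is_combinational(locations):
        seen = {}
        for event, output in locations:  # one entry per location
            if seen.setdefault(event, output) != output:
                return False             # a relation: two outputs for one event
        return True

    # Events graph of the running example, locations 1..5:
    graph = [((0, 0), 0), ((1, 0), 1), ((1, 1), 0), ((0, 1), 1), ((0, 0), 0)]
    print(is_combinational(graph))  # -> True: a combinational circuit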


The events graph consists of a sequence of input events and associated output values, an input event being an ordered n-tuple (e1, e2, ..., en) of arbitrary but fixed values e1, e2, ..., en of the input variables x1, x2, ..., xn, respectively. One draws the input events of an events graph successively from left to right in a sequence one finds appropriate in describing the problem at hand, parting two neighbouring input events by vertical lines¹, and then numbering the sequence of input events successively from left to right; each of these numbers is called a location λ, and gives the position of an input event x (together with the associated output value) within the chosen sequence of input events. The output value in a location is constant because we postulate our circuits to be well-behaved. As no signal, neither input nor output signal, changes its value between the vertical lines, it is irrelevant how far apart they are drawn. Frequently, one draws them at equal distances.

¹ In other words: We draw a vertical line whenever at least one input variable changes its value.

The events graph for our running example has the important property of expressing the output value of the circuit as a function of the input values. A function always maps the input event, an ordered n-tuple (e1, e2, ..., en) of individual input values e1, e2, ..., en, to one and the same output value, and does so for each input event. For instance, at locations 1 and 5 the input event is (0, 0), which in both cases is mapped to the output value 0. If, at location λ = 5, the output were 1, the output would not be a function of the input events — it would be a relation. A relation will map at least one input event to more than one output value. An events graph which depicts the output values as a function of the input events specifies a combinational circuit; in the case of a relation it specifies a sequential circuit. Sequential circuits are the topic of Parts Two and Three.

Each individual value ei of an input event (e1, e2, ..., en) is, in the case of binary circuits, either 0 or 1, so that each such event represents an n-digit binary number. Most frequently we express these binary numbers by their decimal equivalents

    e = Σ_{i=1}^{n} e_i · 2^{n−i}.
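A short sketch of this conversion, assuming an input event is given as a tuple of 0s and 1s:

    # Decimal equivalent e = sum of e_i * 2^(n-i) over i = 1..n.
    def decimal_equivalent(event) -> int:
        n = len(event)
        return sum(e_i * 2 ** (n - i) for i, e_i in enumerate(event, start=1))

    print([decimal_equivalent(e) for e in [(0, 0), (0, 1), (1, 0), (1, 1)]])
    # -> [0, 1, 2, 3], the column labels of the brief-form events table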

It is characteristic for an events graph of a combinational circuit that whenever a certain input event e is depicted, it initiates the same output value. This possible redundancy in the representation is avoided when using an events table instead of an events graph. The events table has one column for each input event, the bottom row containing the output values (unique for each input event). The events table for our running example is shown in Figure 1.5a. For clarity (i.e., to concentrate better on the information — the 0s and 1s — of the table), as well as to reduce the amount of writing necessary, we usually use the events table in its brief form, shown in Figure 1.5b, in which the variables have been 'extracted' from the columns of each row. From the top to the penultimate row, the succession of 0s and 1s of a single column is its input event e. An events table whose variables are not extracted is called the explicit form of the table.

Fig. 1.5 Events table corresponding to the events graph of Figure 1.4

    a) Explicit form:                    b) Brief form:

       x1=0    x1=0   x1=1   x1=1          x  = 0  1  2  3
       x2=0    x2=1   x2=0   x2=1          x1 = 0  0  1  1
       y=0     y=1    y=1    y=0           x2 = 0  1  0  1
      (λ1,λ5)  (λ4)   (λ2)   (λ3)          y  = 0  1  1  0

An advantage of the events table is that it enables us to recognise when a combinational circuit is completely specified; this is the case when an output value, either a 1 or a 0, is specified for every input event, i.e., for every column. If a circuit specification does not state what output value is initiated by each of the input events, it is said to be incomplete. Most technical problems — thankfully — lead to incompletely specified circuits, allowing us to choose output values which lead to simpler realisations. Our running example, as mentioned, is completely specified. In the next section we turn to the interpretation of the events table, this leading us directly into the field of logic.

1.4 Logic Variables and Logic Formulas

The events table is a tool of major importance: Not only does it enable an unambiguous specification of a combinational circuit, its analysis also leads directly to the introduction of logic variables and to the logical evaluation of functions. Let us start with an informal, yet stringent, interpretation of the events table of our running example, repeated below. It states when the output y is 0 and when it is 1. This is expressed in a semi-formal manner in the sentence following the table.

    x1=0   x1=0   x1=1   x1=1
    x2=0   x2=1   x2=0   x2=1
    y=0    y=1    y=1    y=0

    The output variable y is 1 if and only if
    x1 is 0 AND x2 is 1, OR
    x1 is 1 AND x2 is 0.

Employing the symbols

    ⇔ ... for logically equivalent or if and only if (iff),
    ∧ ... for AND, and
    ∨ ... for OR,

the sentence below the table can be written as:

    (y = 1) ⇔ ((x1 = 0) ∧ (x2 = 1)) ∨ ((x1 = 1) ∧ (x2 = 0)).    (1.1)

Before continuing, take a few moments to ponder the above formula, more precisely, the expressions y = 1, xi = 0, and xi = 1 in the inner parentheses. Stating that a certain input variable xi is 0 does not mean that xi really is 0. In fact, we have to measure the value of xi to know its value. Let us refer to a measured value of xi as an instance of xi. We could then substitute the measured value, its instance, for xi in xi = 0, obtaining either 0 = 0 or 1 = 0, as the case may be. These expressions are propositions. A proposition is a statement that is either true (which we denote as 1) or false (denoted as 0). The term xi = 0, on the other hand, has only the form of a proposition, and is thus called a propositional form; it acts as a logic variable, a variable whose value is either 1 or 0. Specifying each input xi to have a certain value, and stating the associated value of the output y, provides us with a single column of the events table.

The rather cumbersome expression (1.1) is greatly simplified by introducing the following logic variables Xi, X̄i, Y, and Ȳ as a shorthand notation for the expressions written within the inner parentheses, i.e. defining

    Xi :⇔ xi = 1,    X̄i :⇔ xi = 0,
    Y  :⇔ y = 1,     Ȳ  :⇔ y = 0.    (1.2)

Rewriting (1.1) with the help of the logic variables Xi, X̄i, Y, and Ȳ leads to what is called a logic formula or Boolean formula:

    Y ⇔ (X̄1 ∧ X2) ∨ (X1 ∧ X̄2).    (1.3)

Colloquially, the symbol ⇔, called logical equivalence, refers to its left and right sides being equal from the point of view of logic, allowing us — among other things — to use the left-hand and right-hand expressions interchangeably in logic formulas (substituting one for the other). The symbol :⇔ takes a (single) logic variable on its left, this being used as a shorthand notation for a logic expression on the right of the symbol :⇔. The symbol :⇔ can be read as '... is defined to be logically equivalent to ...'.

Alternatively to stating when y is 1, we could have used the events table to state precisely when the output variable y is 0, writing this in analogy to (1.1) as

    (y = 0) ⇔ ((x1 = 0) ∧ (x2 = 0)) ∨ ((x1 = 1) ∧ (x2 = 1)),    (1.4)

which, with (1.2), may be written as the logic formula

    Ȳ ⇔ (X̄1 ∧ X̄2) ∨ (X1 ∧ X2).    (1.5)

The formulas (1.3) and (1.5) are said to be complementary.
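The complementarity is easily verified by checking all four input events. A minimal sketch, writing the logic variables as Python Booleans (a convenience, not the book's notation):

    # (1.3): Y     <=> (not X1 and X2) or (X1 and not X2)
    # (1.5): Y-bar <=> (not X1 and not X2) or (X1 and X2)
    def f13(X1: bool, X2: bool) -> bool:
        return (not X1 and X2) or (X1 and not X2)

    def f15(X1: bool, X2: bool) -> bool:
        return (not X1 and not X2) or (X1 and X2)

    for X1 in (False, True):
        for X2 in (False, True):
            assert f13(X1, X2) == (not f15(X1, X2))  # complementary everywhere
    print("(1.3) and (1.5) are complementary on every input event")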

1.5 Drawing the Logic Circuit

A logic circuit is comprised of gates, these being graphic symbols representing logic connectives. The gates for AND, OR and NOT (INVERSION) are shown in Figure 1.6. To invert an input signal you draw a small circle where the input signal enters the gate. If you want to invert an output signal, you draw a small circle where the output signal leaves the gate. Logic formulas, such as (1.3) and (1.5), state which gates to use and how to connect them.

Connecting gates to create a logic circuit. To transform a logic formula into a circuit consisting of gates, you draw an AND gate for each ∧-connective of the formula, and an OR gate for each ∨-connective. The arguments of the ∧- and ∨-connectives in the logic formula are taken as inputs to the associated gates, this ensuring the correct connection between the gates. Some people like to draw a circuit starting with the innermost parentheses of a formula, thus drawing the circuit from the inputs to the output. Others prefer to start with the outermost parentheses, thus drawing the circuit from the output to the inputs. Use whichever method you feel more comfortable with.

The logic circuits of Figure 1.7 all represent the same formula, i.e. (1.3). Figure 1.7b demonstrates that you can stretch a gate to a desired size, while Figure 1.7c shows that gates may be densely packed. The output signal, by the way, need not be positioned in the middle of a gate. To fully understand these gates, we need to know precisely how their output signals react to the input signals. This is shown in Figure 1.8 for the NEGATION, AND gate, and OR gate. Red lines represent TRUTH (i.e. 1), blue lines FALSITY (i.e. 0).

Fig. 1.6 Gates as graphical symbols for logical connectives
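The gate-per-connective rule can also be mimicked in code. The sketch below (an illustrative data structure, not from the book) builds formula (1.3) from its innermost parentheses outward; inverted gate inputs stand in for the small circles:

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Gate:
        op: Callable[[bool, bool], bool]
        a: "Gate | str"          # sub-gate or name of an input variable
        b: "Gate | str"
        invert_a: bool = False   # a small circle on input a
        invert_b: bool = False   # a small circle on input b

        def eval(self, inputs: dict) -> bool:
            va = self.a.eval(inputs) if isinstance(self.a, Gate) else inputs[self.a]
            vb = self.b.eval(inputs) if isinstance(self.b, Gate) else inputs[self.b]
            if self.invert_a:
                va = not va
            if self.invert_b:
                vb = not vb
            return self.op(va, vb)

    AND = lambda p, q: p and q
    OR = lambda p, q: p or q

    left = Gate(AND, "X1", "X2", invert_a=True)   # X1-bar AND X2
    right = Gate(AND, "X1", "X2", invert_b=True)  # X1 AND X2-bar
    Y = Gate(OR, left, right)                     # OR gate collecting both ANDs
    print(Y.eval({"X1": False, "X2": True}))      # -> True, as formula (1.3) demands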


Fig. 1.7 Drawing the circuit for Y ⇔ (X̄1 ∧ X2) ∨ (X1 ∧ X̄2)

Fig. 1.8 Explaining the behaviour of NEGATION, AND, and OR


Chapter 2

Switching Devices

Logic gates, as introduced in the previous chapter, represent the connectives used in logic formulas. Assume you have drawn a circuit diagram using logic gates, and you now want to build the circuit using switching devices of a given technology (pneumatic, electric, or electronic). To be able to do so you need to know how to translate each logic symbol into a symbol or a collection of symbols of the technology you intend to employ. This chapter is restricted to discussing symbols used for pneumatic valves, electric relays, and CMOS transistors, and to showing how they correspond to logic gates.

In general, any quantifiable physical entity (such as voltage, pressure, force, etc.) can be used as a signal. To be independent of the actual physical entity of any binary signal (and its unit of measurement, such as Volt, psi, Newton, etc.) it is convenient and customary to map its two values to the integers 0 and 1. If nothing is said to the contrary, the smaller numerical value of a signal is mapped to 0, while the larger numerical value is mapped to 1, and these two integers are not assigned any unit of measurement.

In this book the lower-case letters x and y (most often with indices) shall always be used as variables whose range is {0, 1} (denoted as x, y ∈ {0, 1}, and meaning that the value of x and that of y is — in any given moment — either 0 or 1), and referred to as binary numeric variables. These variables, x and y, are used to denote the input and output signals of actual switching devices and their symbols.

2.1 Pneumatic Valves

One of the most common pneumatic valves is the spool-and-sleeve design, the principle of which is shown in Figure 2.1. A zero signal refers to atmospheric pressure, a one signal to a pressure of, say, 90 psi (pounds per square inch). If the pilot pressure is 0 the valve is said to be non-actuated, and a spring presses the spool into the leftmost position. Actuating the valve by applying a 1-signal to the pilot chamber causes the spool to be pressed to the right (against the force of the spring). In the non-actuated valve you will notice that the output port and the exhaust port are connected while the air supply is blocked. In the actuated valve the output is connected to the air supply and the exhaust is blocked. Let us refer to this valve as a normally open valve (or NO-valve, for short), this naming convention being borrowed from the realm of electric relays. Normal refers to the pilot pressure x being 0 (the valve is not actuated), whereas open refers to the path between the air supply and the output being interrupted.

Fig. 2.1 A Normally Open valve

The standard symbol for the NO-valve is that of Figure 2.1(a2). The rectangle to which the spring is attached (in our case, the right rectangle) shows the port connections in the non-actuated (the normal) case, the left rectangle depicts the port connections in the actuated case. I took the liberty of drawing the controlled lines — exhaust, air supply, and output — as being detached from the symbol's rectangles to better show how the symbol is to be understood in the actuated case: Think of the controlled lines as being static and the rectangles being pushed to the right by the pilot pressure, as indicated in Figure 2.1(b2). When the valve is not actuated, the spring pushes the rectangles back into the position shown in Figure 2.1(a2). In actual use, the controlled lines are always drawn attached to the rectangle (the one with the spring), and the symbol is always drawn in the non-actuated mode; it is never drawn in the actuated mode indicated in Figure 2.1(b2). By convention, all device symbols in a circuit are drawn in their normal state — i.e., their non-actuated mode — which means that they are drawn as if their input signals were zero.

An alternate behaviour to the NO-valve is obtained if the air supply and the exhaust are interchanged, as shown in Figure 2.2a. When the valve is not actuated the output is connected to the air supply, while actuating the valve connects the output to the exhaust. This behaviour is described by either of the symbols of Figure 2.2b,c, and the valve is called a normally closed valve or NC-valve.

Fig. 2.2 A Normally Closed valve

To analyse the behaviour of a spool-and-sleeve valve (Figure 2.3) it suffices to work with the valve symbol. The input-output behaviour of such a valve is best documented in events tables as shown in Figures 2.3(a2) and (b2). According to the events table of Figure 2.3(a2), 'the output y is 1 iff the input x is 1', or (in falling back on Section 1.4)

    y = 1 ⇔ x = 1,    Y ⇔ X,

this function being called the IDENTITY, as the output Y is always identical to the input X. The symbol for the gate is shown in Figure 2.3(a3). You will probably not be alone in asking what good the IDENTITY function is, as its output is always the same as its input. Frankly, the IDENTITY function serves no logical purpose. But it is of real technical importance as a power amplifier: in pneumatics, power is determined as the product of flow times pressure, so a low-power pilot-signal can control a high-power air-supply to the output.

Fig. 2.3 Spool-and-sleeve valves as power amplifiers

The behaviour of the NC-valve, on the other hand, is fully documented in the events table of Figure 2.3(b2), and from this follows

    y = 1 ⇔ x = 0,    Y ⇔ X̄.

The above function is called a NEGATION, the output always having the complementary value to the input.

The air supply of a spool-and-sleeve valve can be replaced by an input signal, as indicated in Figure 2.4. The difference is that the air supply is always 1 while a signal can vary, meaning it can be either 0 or 1. You will hopefully be able to recognise the valves' behaviour as being documented in the two events tables. The functions associated with these events tables are called AND and INHIBITION. The latter name stems from the fact that x1 inhibits y, meaning that, when x1 = 1, the output y is blocked. It is probably routine to you by now to express the events tables of Figures 2.4(a2) and (b2) in the following formal manner:

    y = 1 ⇔ x1 = 1 AND x2 = 1,    Y ⇔ X1 ∧ X2,
    y = 1 ⇔ x1 = 0 AND x2 = 1,    Y ⇔ X̄1 ∧ X2.

Fig. 2.4 Spool-and-sleeve valves as logic switches

In the above usages the spool-and-sleeve valve cannot realise the logical OR function, a function no design engineer would like to do without. A cheap and effective realisation of the OR function is the shuttle valve of Figure 2.5a. The ball in the design shown will always seal the input of lower pressure, connecting the output with the input of higher pressure. This valve is called a passive valve as the air of the output signal is derived from an input line, not from an air supply. This is a drawback, as is the fact that the valve does not have an exhaust of its own; the output must be exhausted via an input line. The standard symbol for this valve (Figure 2.5b) is obviously inspired by the design shown in Figure 2.5a. The events table describing the valve's behaviour marks the valve as a realisation of the OR function.

Fig. 2.5 A shuttle valve to realise the OR function

To see that the events table of Figure 2.5c does describe the OR function, read the table row-wise (and not column-wise, as we have been doing up till now). First note that 'if x1 = 1 then y = 1' (this covering columns 2 and 3), and then that 'if x2 = 1 then y = 1' (which covers columns 1 and 3). Thus we reason that 'y = 1 iff x1 = 1 OR x2 = 1', and write this as

    y = 1 ⇔ x1 = 1 OR x2 = 1,    Y ⇔ X1 ∨ X2.

We are now in a good position to transform a given logic circuit into one consisting of valve symbols. Such a transition is shown in Figure 2.6 for the running example of Chapter 1. Of course this example is only used to demonstrate a technique, and has nothing to do with the original problem of switching lights on or off. To replace the INHIBITIONs of Figure 2.6a by NC-valves, note the correspondence laid down in Figure 2.4.

Fig. 2.6 A valve realisation of a logic circuit
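The valve behaviours met in this section reduce to small truth functions. A sketch (the function names are mine) that reproduces the events tables:

    # Valve behaviours as truth functions on {0, 1}.
    def identity(x):   return x                   # NO-valve fed from the air supply
    def negation(x):   return 1 - x               # NC-valve fed from the air supply
    def and_valve(x1, x2):  return x1 & x2        # NO-valve, supply replaced by x2
    def inhibition(x1, x2): return (1 - x1) & x2  # NC-valve: x1 = 1 blocks y
    def shuttle_or(x1, x2): return x1 | x2        # shuttle valve

    for x1 in (0, 1):
        for x2 in (0, 1):
            print(x1, x2, and_valve(x1, x2), inhibition(x1, x2), shuttle_or(x1, x2))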

2.2 Electric Relays

In its simplest form, an electric relay (see Figure 2.7) consists of a coil, a plunger, and a contact that is actuated by the plunger. When current flows through the coil, its magnetic field attracts the plunger, thereby opening a normally closed contact (Figure 2.7a) and closing a normally open contact (Figure 2.7d). Interrupting the current to the coil allows a spring to return the plunger, and thus the contacts, to their initial positions.

Fig. 2.7 The electric relay: principle of operation, IO-table, symbol

The input signal to a relay is the voltage applied to the relay's coil, this voltage being represented by the binary variable x ∈ {0, 1} of Figure 2.7. The output signal y ∈ {0, 1} stands for the transmission of the relay's contact, i.e., its ability to pass or transmit current. The transmission y of a closed contact is taken to be 1, that of an open contact is 0. Relays (such as those of Figures 2.7a,d) are always drawn under the assumption that their input signals are 0, i.e., that zero voltage is applied to their coils — this being referred to as the normal state of the relays. Thus, a normally open contact — a so-called NO-contact — is open (y = 0) when the relay's coil is not energised (when x = 0), and closed (y = 1) when x = 1. A normally closed contact — an NC-contact — is closed (y = 1) when the relay's coil is not energised (x = 0), and open (y = 0) when x = 1. This behaviour is expressed in the input-output tables (the events tables) of Figures 2.7b,e. Stating when the outputs y of these tables are 1 is expressed formally in the logic formulas

    y = 1 ⇔ x = 0,    Y ⇔ X̄,
    y = 1 ⇔ x = 1,    Y ⇔ X,

which show clearly that the relay with a normally closed contact is a NEGATION, while the relay with a normally open contact is an IDENTITY. The symbolic representation of a relay (see Figures 2.7cf) consists of only two things: (a) a contact drawn in its normal state, its transmission y standing for the output of the relay, and (b) an input variable — in our example, x — which is


Fig. 2.8 Principle of a serial network

the input signal to the coil activating the contact. The symbolic representation of a relay is an anachronism in that it has neither an input lead nor a clearly defined output lead (such as the logic symbols of Section 1.5, or the pneumatic symbols of Section 2.1, have). As stated for pneumatic valves, by convention all relays in a circuit are drawn in their normal state, which means that they are drawn as if their input signals were zero. The coil of a relay usually activates multiple contacts, some normally open, the others normally closed. A single contact, or two or more interconnected contacts that transmit current to a single load, comprise a relay network; for example, see the serial network of Figure 2.8b or the parallel network of Figure 2.9b. The interconnected contacts of a relay network are always drawn in their normal state, assigning to each contact the input variable of the coil activating it (in our cases, the variables x, x1, x2, etc.). The way the contacts are connected unequivocally determines when the network's transmission y is 1. The network's transmission y is usually assigned to the network's load. A load is any device (such as a lamp, an electric motor, a relay coil, etc.) that receives and dissipates electric energy. The load is always needed to prevent the network from short-circuiting. The only relay networks to be touched on here are the serial-parallel networks, well knowing that this does not do the possibilities of relay networks justice.

Figure 2.8a shows two relays in series, with a lamp as load. As explained in the above paragraph, the relay network is drawn as in Figure 2.8b. The analysis of the input-output behaviour of Figure 2.8a (or Figure 2.8b, if you prefer) is undertaken in the events table of Figure 2.8c, column 3 of this table stating when y is 1:

    y = 1 ⇔ x1 = 1 AND x2 = 0,    Y ⇔ X1 ∧ X̄2.

This result lets us conjecture that the serial connection of two contacts is expressed by the logic AND (∧), and that, vice versa, the logic AND (∧) is realised by connecting two contacts in series. For our example, this conjecture allows us to


Fig. 2.9 Principle of a parallel network

state that the behaviour of the circuits of Figures 2.8a,b is equivalent to that of the gate of sub-figure (d). Figure 2.9a depicts relays connected in parallel, again with a lamp as a load; the symbolic relay network is drawn in Figure 2.9b. From either of these representations we deduce that the circuit's transmission y is 1 when the transmission of at least one of the contacts is 1; this is expressed in the events table of sub-figure (c) when concentrating only on the transmission y (i.e., for the moment, not considering the variable z). As with the shuttle valve of Figure 2.5, we read the events table of Figure 2.9c row-wise, arguing that y = 1 iff x1 = 1 (hereby describing columns 3 and 4) OR x2 = 0 (which covers column 1 and again column 3):

    y = 1 ⇔ x1 = 1 OR x2 = 0,    Y ⇔ X1 ∨ X̄2.

This result lets us conjecture that the parallel connection of two contacts is expressed by the logic OR (∨), and that, vice versa, the logic OR (∨) is realised by connecting two contacts in parallel. For the present example, this conjecture allows us to state that the behaviour of the circuits of Figures 2.9a,b is equivalent to that of the gate of sub-figure (d).
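These two conjectures are easily checked by exhaustion. The following sketch (in Python, not taken from the book; the helper names are mine) models a contact as a function of its coil signal and takes the minimum of the transmissions for a serial connection, the maximum for a parallel one:

def no_contact(x):           # normally open: transmits (1) when the coil is energised
    return x

def nc_contact(x):           # normally closed: blocks (0) when the coil is energised
    return 1 - x

def serial(*t):              # conjecture: a serial connection realises AND
    return min(t)

def parallel(*t):            # conjecture: a parallel connection realises OR
    return max(t)

# Events tables of Figures 2.8c and 2.9c, one column per input event (x1, x2):
for x1 in (0, 1):
    for x2 in (0, 1):
        y_serial = serial(no_contact(x1), nc_contact(x2))      # Y ⇔ X1 ∧ X̄2
        y_parallel = parallel(no_contact(x1), nc_contact(x2))  # Y ⇔ X1 ∨ X̄2
        print((x1, x2), y_serial, y_parallel)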

Inverting a Relay Circuit

There are two ways to invert a circuit. The simplest and most direct way is to invert the circuit's output. For instance, suppose we want to invert the circuit of


Fig. 2.10 Expressing the OR-function using double negation

Fig. 2.11 Transforming the logic circuit of a latch into a relay network

Figure 2.9b. This is done by replacing the lamp — which is the circuit's output load — by an inverter, that is, by a normally closed relay as shown in Figure 2.10a. The relay's coil, y, is now the load for the original circuit, while the lamp is the load, z, for the inverter. The values of z are shown in the bottom row of Figure 2.9c — they are the complement of those for y. The second way to invert a circuit is to calculate the inverted circuit directly from the inverted output z of Figure 2.9c, obtaining

    Z ⇔ X̄1 ∧ X2,

which leads to the circuit of Figure 2.10b.
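Both ways of inverting must, of course, agree. A few lines of Python (an illustrative sketch, not from the text) compare the inverted output of the parallel network with the directly calculated circuit:

# Inverting the parallel network of Figure 2.9b: both methods must coincide.
for x1 in (0, 1):
    for x2 in (0, 1):
        y = max(x1, 1 - x2)           # Y ⇔ X1 ∨ X̄2 (the original network)
        z_inverted = 1 - y            # first method: invert the output y
        z_direct = min(1 - x1, x2)    # second method: Z ⇔ X̄1 ∧ X2
        assert z_inverted == z_direct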

Realising Feedback

Feedback plays a dominating role in the design of so-called latches, these being elementary memory circuits. In fact they are so important that a whole part of this book is devoted to them. At this early stage, we only want to see how to transform a logic feedback circuit, say, that of Figure 2.11a, into a relay circuit. The output OR-gate of sub-figure (a) requires our contact network to consist of two parallel paths, one of which contains the NO-contact x2. The AND-gate of sub-figure (a) leads to the serial circuit consisting of the NC-contact x1 and the NO-contact named y*. The load of the relay network is the coil y of a relay. To realise the feedback of the latch pictured in sub-figure (a) we define the NO-contact y* of sub-figure (b) to be actuated by the coil y.


The delay Δt in the logic circuit of sub-figure (a) expresses the fact that the NO-contact y* follows the coil's state of activation with the time delay of Δt. The logic circuit is thus modelled after the relay circuit. But it must be noted that the theory of feedback latches does not require the existence of a delay in the feedback loop.
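The behaviour of this latch is easily imitated step by step. In the following sketch (my own illustration, not the book's), the fed-back contact y* is simply the value of the coil y from the previous time step, which plays the role of the delay Δt:

def step(x1, x2, y_star):
    # Y ⇔ X2 ∨ (X̄1 ∧ Y*): x2 sets the latch, x1 resets it.
    return max(x2, min(1 - x1, y_star))

y = 0
for x1, x2 in [(0, 0), (0, 1), (0, 0), (1, 0), (0, 0)]:
    y = step(x1, x2, y)        # the new y is fed back as y* of the next step
    print((x1, x2), '->', y)
# The latch sets when x2 = 1, holds its state when both inputs return to 0,
# and resets when x1 = 1.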

2.3 CMOS Transistors

The transistor was invented in 1948 by John Bardeen, Walter Brattain and William Shockley and soon replaced the vacuum tube as a cheap amplifier. We shall take a look at only one transistor device, out of a large variety, namely the CMOS technology. The transistors used are insulated-gate field-effect transistors (IGFET) — Figure 2.12 — more commonly called MOS for the sequence of the layers of material used in their design — metal-oxide-silicon. There are two types of these transistors, according to the polarity (negative or positive) of the electric charges used to create a conducting channel between the drain and the source regions: n-channel MOS (NMOS) and p-channel MOS (PMOS). Digital circuits that employ both these transistor types are called complementary MOS (CMOS), and have the advantage of needing only a single level of supply voltage.

For current to be able to flow between the drain and the source region, you need to create a conducting channel in the silicon between these regions. This is achieved by applying the proper voltage between the gate and the substrate. In an n-channel MOS you obtain a conducting channel by applying a more positive potential to the gate than to the substrate (the voltage from gate to substrate is positive), thus attracting negative charges to the gate while repelling positive charges. In a p-channel MOS the conducting channel is obtained by making the voltage from substrate to gate positive, thus attracting positive charges to the gate while repelling negative charges from the gate. In actual usage, the source is always connected to the substrate. To obtain a channel, the density of source-type charges must be enhanced, giving these transistors the name enhancement type transistors.

The simplest CMOS circuit is the inverter of Figure 2.13. Its principle of operation is basic to CMOS technology. If you understand how the inverter works, you almost automatically understand the next two gates. The idea behind the inverter

Fig. 2.12 Principle of the n-channel and p-channel MOS


Fig. 2.13 The CMOS inverter

is shown in the relay circuit of Figure 2.13a: the two serial contacts are always in a state of alternate transmission — when one is closed, the other is open. When the top switch, the NC-contact, is closed and the bottom switch, the NO-contact, open, the full supply voltage is fed through to the output y. Vice versa, the output y is zero when the top switch is open and the bottom one closed. The relay circuit translates directly to the transistor circuit of Figure 2.13b, the NC-contact being replaced by a PMOS, the NO-contact by an NMOS. In the circuit as drawn, the gates of both transistors are at ground potential due to the way the switch governing the gate potential is set. As the substrate of the PMOS is connected to high potential, a conducting p-channel is created. The NMOS transistor, on the other hand, is non-conducting as it has 0 voltage between gate and substrate, both being connected to ground. The voltage measurable at the output y is thus high. If the switch is brought to the alternate position, in which the potential at the gates of both transistors is high, the PMOS becomes non-conducting while the NMOS becomes conducting. The output voltage y is thus brought to zero. The actual circuit, of course, is always drawn using MOS symbols, giving you Figure 2.13c. As the input and output of this circuit are always complementary, it is an inverter or a negation, the symbol for which is shown in Figure 2.13d.

There are two successful ways (out of four trivial ones) to generalise the principle inverter circuit of Figure 2.13a: one of these is shown in Figure 2.14a, the other in Figure 2.15a, and both lead to circuits superbly adapted to CMOS technology. The first generalisation of the inverter of Figure 2.13a is achieved by substituting the single NC-contact by two serial NC-contacts, and substituting the single NO-contact by two parallel NO-contacts — this leading to Figure 2.14a. The equivalent CMOS circuit is obtained by substituting PMOS transistors for NC-contacts, and NMOS transistors for NO-contacts. As with the inverter of Figure 2.13c, note that in Figure 2.14b the sources and substrates of the PMOS transistors are connected to high potential, or at least higher potential, while the sources and substrates of the NMOS transistors are connected to low (or lower) potential.


Fig. 2.14 The CMOS NOR-circuit

Fig. 2.15 The CMOS NAND-circuit

The simplest way to analyse the circuit of Figure 2.14 is to develop the events table shown in Figure 2.14c. You start out with the table of input events, leaving the bottom row, the one for the output y, empty. Then, for each input event, you deduce from the behaviour of the circuit the value for the output y, entering the outputs into the bottom row. Considering the result, you will notice that inverting each output value leads to the OR function. In other words, the events table of Figure 2.14c is that of a NOT-OR function, usually referred to as the NOR function. The logic symbol used for this function is shown in Figure 2.14d; it is simply an OR gate with an inverted output. The second generalisation of the inverter leads to Figure 2.15a and that to Figure 2.15b. The analysis needed to obtain the events table of Figure 2.15c is easy, so I again leave the details to you. The result, as you will have noticed, is the inversion of the AND function, i.e. a NOT-AND function, and is thus referred to as the NAND function, the logic symbol for which is shown in Figure 2.15d.

Of course the CMOS circuits discussed here are only a small excerpt of the circuits used. But they are basic and absolutely essential to the understanding of the techniques employed. One of the most important omissions is a discussion of circuits with feedback, circuits that lead to the realisation of memory devices — so-called latches and flip-flops. These circuits are discussed, together with their theory, in later chapters.
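If you want to check these two events tables without building the circuits, the following sketch (mine, not the book's) imitates the pull-up and pull-down networks: a PMOS conducts when its gate is at 0, an NMOS when its gate is at 1.

def pmos(gate): return gate == 0      # conducts on a low gate potential
def nmos(gate): return gate == 1      # conducts on a high gate potential

def nor(a, b):
    pull_up = pmos(a) and pmos(b)     # two serial PMOS to the supply
    pull_down = nmos(a) or nmos(b)    # two parallel NMOS to ground
    assert pull_up != pull_down       # exactly one network conducts
    return 1 if pull_up else 0

def nand(a, b):
    pull_up = pmos(a) or pmos(b)      # two parallel PMOS
    pull_down = nmos(a) and nmos(b)   # two serial NMOS
    assert pull_up != pull_down
    return 1 if pull_up else 0

print([nor(a, b) for a in (0, 1) for b in (0, 1)])   # [1, 0, 0, 0]
print([nand(a, b) for a in (0, 1) for b in (0, 1)])  # [1, 1, 1, 0]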


Fig. 2.16 NOR and NAND versions of our running example

The logic building blocks of CMOS technology are NAND-, NOR- and NEGATION-gates. These, therefore, are the gates one wishes to use when realising logic circuits in CMOS technology. To get an impression of pure NOR- and NAND-design, take a look at Figure 2.16. These circuits are equivalent to that of Figure 1.5 of our running example of Chapter 1. Transforming larger AND, OR and NOT based circuits into NOR or NAND circuits can be quite demanding and is discussed in a later chapter.

Chapter 3

Elementary Logic Functions

Elementary logic functions are the building blocks of all logic functions, however complicated they may be, and are thus of prime importance. In the following pages we discuss all elementary logic functions and how they interact. Much of this material, such as the summaries of theorems, is only intended as reference to be at hand when needed (e.g., in calculations, or for theoretical considerations).

3.1 Logic Functions

An elementary logic function F maps ordered pairs of truth values, FALSITY (for which we write 0) and TRUTH (denoted as 1), into the set {0, 1} of truth values:

    F : {0, 1}² → {0, 1}.

These functions have, at most, two input variables and are summarised in the table of Figure 3.1. The middle column of this table shows function symbols commonly used in switching algebra. The function symbols > and < for the INHIBITIONs, ∨̃ for NOR, and ∧̃ for NAND are not all that common — these symbols were introduced by Iverson for APL. Iverson's notation for the INHIBITIONs was chosen because it is mnemonic and because there is no generally accepted notation for these functions. In the case of NOR and NAND, Iverson's notation is mnemonically clearly superior to the respective notations of Sheffer (X1 ↓ X2 :⇔ X1 ∨̃ X2) and Nicod (X1 | X2 :⇔ X1 ∧̃ X2) sometimes used. In switching algebra the negation of a variable X is commonly denoted as X̄, whereas in logic the negation would preferably be written as ¬X.
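Since a function of two variables has four input events, there are exactly 2⁴ = 16 output columns and hence 16 elementary logic functions. The sketch below (mine, not the book's) generates them all mechanically; matching each column to its name in Figure 3.1 is a useful exercise:

from itertools import product

inputs = list(product((0, 1), repeat=2))   # the events (X1, X2) = 00, 01, 10, 11
for code in range(16):                     # one elementary function per column
    column = [(code >> k) & 1 for k in range(4)]
    print(code, dict(zip(inputs, column)))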



Fig. 3.1 Table of the elementary logic-functions

Happily, it suffices to use sub-sets of the set of elementary connectives {¬, ∧, >, …}

… x > y because x4, being 1, is larger than y4, which is 0, while x3 and y3, both being 0, are equal. In the cells 8, 9, 12, 13, x > y because x3 > y3. The 1 in cell


Fig. 5.11 Using reduced VK-maps to specify when one 4-digit binary number, x = x1·2³ + x2·2² + x3·2¹ + x4·2⁰, is greater than another, y = y1·2³ + y2·2² + y3·2¹ + y4·2⁰

14 states that x > y because x4 > y4 while x3 = y3. Evaluating the discussed VK-map, we get:

    Z⁽²⁾x>y ⇔ X3Ȳ3 ∨ (X3 ∨ Ȳ3)X4Ȳ4.    (5.25)

Now consider the possibility of the two higher-order bits being unequal, as discussed in Figure 5.11b. If they are equal, which is the case in the main diagonal of the VK-map, then we are referred to the interim result Z⁽²⁾x>y of the prior VK-map by entering Z⁽²⁾x>y into the cells of the main diagonal. If the higher-order bits are unequal, the VK-map has 0s and 1s entered as in the prior VK-map. When evaluating a reduced VK-map, a VK-map containing map-entered variables, you first calculate the expression for the 1-cells. Then every map-entered variable is ANDed with the conjunctive term of a K-set containing the map-entered variable and 1s. The largest such K-sets are shown in Figure 5.11b. The result obtained in our case is:

    Z⁽⁴⁾x>y ⇔ X1Ȳ1 ∨ (X1 ∨ Ȳ1)X2Ȳ2 ∨ (X1X2 ∨ Ȳ1X2 ∨ X1Ȳ2 ∨ Ȳ1Ȳ2)Z⁽²⁾x>y
            ⇔ X1Ȳ1 ∨ (X1 ∨ Ȳ1)X2Ȳ2 ∨ (X1 ∨ Ȳ1)(X2 ∨ Ȳ2)Z⁽²⁾x>y.    (5.26)

The circuit obtained from (5.25) and (5.26) is shown in Figure 5.12.


Fig. 5.12 The circuit, derived from (5.25) and (5.26), that emits a 1-signal when a 4-digit binary number, x = x1·2³ + x2·2² + x3·2¹ + x4·2⁰, is greater than another, y = y1·2³ + y2·2² + y3·2¹ + y4·2⁰
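Formulas (5.25) and (5.26) lend themselves to an exhaustive check. The sketch below (my own verification, assuming x1 to be the high-order bit as in the caption above) compares the composed formula with the arithmetic relation x > y for all 256 input pairs:

def bits(n):                    # n as (b1, b2, b3, b4), b1 the high-order bit
    return tuple((n >> k) & 1 for k in (3, 2, 1, 0))

for x in range(16):
    for y in range(16):
        x1, x2, x3, x4 = bits(x)
        y1, y2, y3, y4 = bits(y)
        # (5.25): the comparator for the two low-order bits
        z2 = (x3 and not y3) or ((x3 or not y3) and x4 and not y4)
        # (5.26): the full comparator, reusing z2 as the map-entered variable
        z4 = ((x1 and not y1)
              or ((x1 or not y1) and x2 and not y2)
              or ((x1 or not y1) and (x2 or not y2) and z2))
        assert bool(z4) == (x > y)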

Chapter 6

Adjacency and Consensus

This chapter is a precursor to algebraic minimisation, taken up in the next chapter. The operators and methods developed here are inspired by the visual methods used in evaluating VK-maps. It is helpful to have a clear intuitive grasp of the concepts we want to use formally, so, in Section 1, we introduce the two prime concepts, adjacency and consensus, in a visual and informal way. Only in the next two sections do we define these concepts formally. The last section, Section 4, is devoted to the question of when one K-set is a (proper) subset of another.

6.1 Adjacent K-Sets and their Consensus

Adjacency is a property associating two K-sets, this property allowing us to define a third and special K-set called a consensus, its elements carved out of the adjacent K-sets.

    Informally, two K-sets are said to be adjacent if they are disjoint (i.e., have no elements in common), and if they have, at least partially, a border or edge in common in a VK-map.    (6.1)

Adjacency was originally conceived for elementary K-sets Ke, as those pictured in Figure 6.1a, where K0 and K2, as well as K5 and K13, are adjacent. This original adjacency concept of two elementary K-sets is extended to the adjacency of any two K-sets as stated above. The K-sets (KA, KB), (KA, KC), (KA, K14), (KB, KC) of Figure 6.1b show that adjacent K-sets need not be of the same size and need not be aligned. The criterion of importance is for them to have, at least partially, a border in common. Just to round off the picture, no two K-sets of the three shown in Figure 6.2a are adjacent as they are not pairwise disjoint. Neither are any of the elementary K-sets of Figure 6.2b adjacent as no two have a border in common.


Fig. 6.1 Examples of adjacent K-sets

Fig. 6.2 Examples of non-adjacent K-sets

Now, to get a first impression of the consensus, take a look at Figure 6.3 which shows all but one of the consensuses for the sets of Figure 6.1. You might want to see if you can find the general rule by which to find or define consensuses — if so interrupt your reading here . . .

    The consensus of two given adjacent K-sets is the largest K-set that can be constructed exclusively of elements of both the given adjacent K-sets.    (6.2)

By this definition, you will notice that K0,2 is the consensus of (K0, K2) while K5,13 is the consensus of (K5, K13), as shown in Figure 6.3a. This also gives a first impression of how adjacent K-sets are incorporated into a larger K-set, one called a consensus. The incorporated K-sets are proper subsets of the consensus, and of no further relevance in a minimisation process. For this reason we shall later search for and eliminate K-sets that are proper subsets of other K-sets.


Fig. 6.3 Examples of consensuses

Fig. 6.4 K-sets that are not consensuses

In Figure 6.3b you can see how the K-set K14 is incorporated into the consensus KA,14, and KC is incorporated into the consensus KA,C. Not always is a consensus successful in incorporating one or both of the K-sets of which it is constructed, as you can see by viewing KA,B. The one K-set missing in Figure 6.3b, the K-set KB,C = {1, 9} of (KB, KC), was only left out so as not to overload the picture.

In contrast to the consensus examples of Figure 6.3, the examples of Figure 6.4 present counter-examples. KU cannot be the consensus of KD and KF, for these K-sets — not being disjoint — do not have a consensus. K9 and K12, on the other hand, are disjoint. But, not being adjacent, they also have no consensus, so that KV cannot be their consensus. Another argument can also be put forward: KV cannot be the consensus of K9 and K12 as KV does not consist solely of elements of K9 and K12. But why is neither KX nor KY of Figure 6.4b a consensus? KX is not the largest K-set whose elements are taken exclusively from KA and KC — whereas KA,C (as shown in Figure 6.3b) is. We can state this more simply by saying: as KX is a subset of KA,C, KX cannot be the consensus. KY, on the other hand, is too big: it contains an element that belongs neither to KA nor to KC.


6.2 Formalising Adjacency

All the information on a generalised minterm Cϕ1...ϕn and on a K-set Kϕ1...ϕn is contained in their index n-tuple, the multiple input event (ϕ1, . . . , ϕn), in which each ϕi stands for one of the symbols 0, 1, xi. Therefore, it suffices to work with the index n-tuples (ϕ1, . . . , ϕn) of the generalised minterms or the K-sets instead of with the generalised minterms or K-sets themselves. And this we shall subsequently do when formalising the concepts of adjacency and consensus.

The formal definition of the adjacency of elementary K-sets Ke and Kε is quite simple: they are adjacent if their n-tuples (e1, . . . , en) and (ε1, . . . , εn) differ in exactly one position. But this definition does not point in the right direction when trying to generalise the concept of adjacency to fit all K-sets: we need to stress the difference in a certain position of the n-tuples, not the equality of all other positions. We therefore reformulate the above definition as follows:

    Two elementary K-sets Ke and Kε are said to be adjacent if there exists one and only one mutual position in their n-tuples e := (e1, . . . , en) and ε := (ε1, . . . , εn), with each ei, εi ∈ {0, 1}, in which one n-tuple has the value 0, the other the value 1.    (6.3)

We now extend this definition to apply to general K-sets.

    Two arbitrary K-sets Kϕ and Kψ are said to be adjacent if there exists one and only one mutual position in their n-tuples ϕ := (ϕ1, . . . , ϕn) and ψ := (ψ1, . . . , ψn), with each ϕi, ψi ∈ {0, 1, xi}, in which one n-tuple has the value 0, the other the value 1.    (6.4)

The only difference between these definitions, (6.3) and (6.4), lies in the ranges {0, 1} and {0, 1, xi} of the n-tuples' variables (while their names — e, ε, ϕ, or ψ — are irrelevant). When using the range {0, 1}, equally positioned elements in the n-tuples (e1, . . . , en) and (ε1, . . . , εn) must be equal (both 0, or both 1) if they are not complementary. On the other hand, when using the range {0, 1, xi}, and considering those positions of the n-tuples (ϕ1, . . . , ϕn) and (ψ1, . . . , ψn) that are not complementary, they can contain equal symbols (both 0, both 1, or both xi), or the symbol pairing 0 and xi, or 1 and xi. The definition (6.4) subsumes (6.3). To express (6.4) more concisely, we introduce the adjacency operator α, as defined in Figure 6.5. Its arguments vary over the range {0, 1, xi}, producing 0 or 1 as shown in the body of the table. As the table mirrors across its main diagonal, the operator is commutative. Applying the adjacency operator α to the n-tuples (or vectors) ϕ and ψ is explained as applying it to equally positioned elements of the n-tuples (operators with this property are called scalar operators). The result is referred to as the adjacency vector α:


               ψi
    α  |  0   1   xi
 ϕi 0  |  0   1   0
    1  |  1   0   0
    xi |  0   0   0

Fig. 6.5 Defining the adjacency operator α. The body of the table defines the value of each component αi := ϕi α ψi of the adjacency vector α := (α1, . . . , αn).

 K0: 0 0 0 0        K14: 1 1 1  0        KD: 0  x2 x3 1
 K2: 0 0 1 0        KC : 1 0 x3 1        KF: x1 x2 1  x4
 α : 0 0 1 0        α  : 0 1 0  1        α : 0  0  0  0
 (a) adjacent       (b) disjoint         (c) non-disjoint

Fig. 6.6 Examples of adjacent, disjoint, and non-disjoint K-sets

    α := ϕ α ψ := (ϕ1 α ψ1, . . . , ϕn α ψn).    (6.5)

Presumably, no further explanation is necessary to see that adjacency, as defined in (6.4), can now be expressed with the help of the adjacency vector as follows:

    Two arbitrary K-sets Kϕ and Kψ are adjacent if the adjacency vector α = ϕ α ψ contains a single 1.    (6.6)

It is of course reassuring to affirm that the formal description of adjacency, (6.6), coincides with its intuitive concept as put forth in the previous section. To this end consider the pairs of K-sets in Figure 6.6. Do note that the number of 1s in the adjacency vector α of the table of Figure 6.6a is 1, and that this successfully reflects the fact that the elementary K-sets K0 and K2, pictured in Figure 6.3a, are adjacent. The adjacency vector in the table of Figure 6.6b has two 1s so that, according to (6.6), we cannot expect the K-sets KC and K14 to be adjacent, although these K-sets are disjoint, as you can see in Figure 6.1b. The last example in Figure 6.6, the table in Figure 6.6c, has an adjacency vector α containing no 1s. The associated K-sets, KD and KF, are not disjoint, as a glance at Figure 6.2a shows. The examples of Figure 6.6 are put into perspective by the following theorem on disjoint K-sets:

    Two K-sets Kϕ1...ϕn and Kψ1...ψn are disjoint iff the adjacency vector (α1, . . . , αn) of the index n-tuples (ϕ1, . . . , ϕn) and (ψ1, . . . , ψn) contains one or more 1s,    (6.7)


which, together with (6.6), shows that adjacent K-sets are disjoint, but that not all disjoint K-sets are adjacent. As the proof of this theorem is quite simple, I leave it to you—but I do encourage you to take the necessary time out for the proof.
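For readers who like to experiment, the adjacency operator translates directly into code. In this sketch (mine, not the book's) an index n-tuple is written as a string over the symbols 0, 1 and x, the index i of xi being implicit in the position:

def alpha(phi, psi):
    """Scalar adjacency vector of two K-set n-tuples, per Figure 6.5."""
    return [1 if {a, b} == {'0', '1'} else 0 for a, b in zip(phi, psi)]

def adjacent(phi, psi):    # (6.6): exactly one 1 in the adjacency vector
    return sum(alpha(phi, psi)) == 1

def disjoint(phi, psi):    # (6.7): one or more 1s in the adjacency vector
    return sum(alpha(phi, psi)) >= 1

print(alpha('0000', '0010'), adjacent('0000', '0010'))   # [0, 0, 1, 0] True
print(alpha('1110', '10x1'), disjoint('1110', '10x1'))   # [0, 1, 0, 1] True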

6.3 Formalising Consensus

Formalising the intuitive concept of consensus, as put forth in (6.2), is easier than one might think when first confronted with the problem. We start by expressing some adjacent K-sets of Figure 6.3 and their consensuses by their defining n-tuples, which we do in Figure 6.7, reading the displayed vectors from Figure 6.3. To obtain a simpler notation, the indices of the variables xi are omitted. Think of the quadruples of the first two rows of the tables of Figure 6.7 as being instances of (ϕ1, ϕ2, ϕ3, ϕ4) and (ψ1, ψ2, ψ3, ψ4). Which of these general quadruples of adjacent K-sets represents the first row, and which the second, is irrelevant as any consensus operator to be developed will most certainly be commutative. Each quadruple of the bottom row represents the consensus, the general form of which we write as (κ1, κ2, κ3, κ4).

Each of the tables of Figure 6.7 contains at least some of the information needed to evaluate a component, κi, of a consensus vector from the equally positioned components, ϕi and ψi, of adjacent K-sets. For instance, consider the table of Figure 6.7a: when ϕi and ψi are 0 (as in columns 1, 2, and 4), the associated component κi of the consensus is 0. On the way to defining a consensus operator, we draw a table such as that of Figure 6.8a, the rows of which stand for ϕi ∈ {0, 1, x}, and the columns for ψi ∈ {0, 1, x}; the body of the table contains the associated values of κi. According to columns 1, 2 or 4 of Figure 6.7a, we thus enter 0 for κi into cell (ϕi, ψi) = (0, 0) of Figure 6.8a. Translating column 3 of Figure 6.7a into the table of Figure 6.8a allows us to enter the symbol x into the cell (ϕi, ψi) = (0, 1). As ϕi and ψi can be interchanged (the consensus operation is of necessity commutative), x must also be entered into cell (ϕi, ψi) = (1, 0). Considering the columns of all the tables of Figure 6.7 allows us to complete the consensus table as shown in Figure 6.8b, and it is this table that we use to define the consensus operator κ.

 K0  : 0 0 0 0      KA  : x 1 x 1      KA  : x 1 x 1
 K2  : 0 0 1 0      KB  : 0 0 0 x      KC  : 1 0 x 1
 K0,2: 0 0 x 0      KA,B: 0 x 0 1      KA,C: 1 x x 1
 (a)                (b)                (c)

Fig. 6.7 Expressing adjacent K-sets of Figure 6.3 and their consensuses by their defining quadruples


           ψi                          ψi
    κ |  0   1   x            κ  |  0   1   xi
 ϕi 0 |  0   x             ϕi 0  |  0   xi  0
    1 |  x                    1  |  xi  1   1
    x |                       xi |  0   1   xi
      (a) in development         (b) complete

Fig. 6.8 Developing and defining the consensus operator κ. The body of the complete table defines the value of each component κi := ϕi κ ψi of the consensus vector κ := (κ1, . . . , κn).

 KD: 0 x x 1        K9 : 1 0 0 1
 KF: x x 1 x        K12: 1 1 0 0
 KU: 0 x 1 1        KV : 1 x 0 x

Fig. 6.9 Applying the consensus operator to two non-adjacent K-sets produces a K-set that we do not accept as a consensus — see Figure 6.4a.

Applying the consensus operator κ to the n-tuples (or multiple events) ϕ and ψ of adjacent K-sets Kϕ and Kψ is explained as applying it to equally positioned elements of the n-tuples, the result being the consensus vector κ,

    κ := ϕ κ ψ := (ϕ1 κ ψ1, . . . , ϕn κ ψn).    (6.8)

The consensus operator κ allows us to define the concept of consensus formally:

    The K-set Kκ is the consensus of the adjacent K-sets Kϕ and Kψ iff κ = ϕ κ ψ and κ is defined by the table of Figure 6.8b.    (6.9)

Note that a consensus is only calculated from adjacent K-sets. If you apply the consensus operator κ to two K-sets that are not adjacent, the K-set you get will not be a consensus. For example, see Figure 6.9.
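The consensus operator is just as easily mechanised. The following sketch (again my own illustration, not the book's) encodes the table of Figure 6.8b and refuses, by an assertion, to form a consensus of non-adjacent K-sets:

def kappa_i(a, b):                 # the table of Figure 6.8b
    if {a, b} == {'0', '1'}:
        return 'x'                 # the complementary position is freed
    return b if a == 'x' else a    # otherwise 0, 1 or x carries through

def consensus(phi, psi):
    complementary = sum({a, b} == {'0', '1'} for a, b in zip(phi, psi))
    assert complementary == 1, "only adjacent K-sets have a consensus (6.9)"
    return ''.join(kappa_i(a, b) for a, b in zip(phi, psi))

print(consensus('0000', '0010'))   # 00x0  (K0 and K2 of Figure 6.7a)
print(consensus('x1x1', '000x'))   # 0x01  (KA and KB of Figure 6.7b)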

6.4 When is one K-set a Subset of another?

In the process of finding all prime sets, we wish to eliminate those K-sets incorporated into (i.e., which are subsets of) a consensus. It is thus necessary to have an algorithm, a formal procedure, by which to decide if a given K-set is a subset of another. To formulate such a procedure, we introduce the subset operator σ, as defined in Figure 6.10.


               ψi
    σ  |  0   1   xi
 ϕi 0  |  1   0   1
    1  |  0   1   1
    xi |  0   0   1

Fig. 6.10 Defining the subset operator σ. The body of the table defines the value of each component σi := ϕi σ ψi of the subset vector σ := (σ1, . . . , σn).

 K5   : 0  1 0  1        KC  : 1  0  x3 1
 K5,13: x1 1 0  1        KA,C: 1  x2 x3 1
 σ    : 1  1 1  1        σ   : 1  1  1  1
        K5 ⊆ K5,13              KC ⊆ KA,C

 KB   : 0  0 0  x4       KA,B: 0  x2 0  1
 KA   : x1 1 x3 1        KA  : x1 1  x3 1
 σ    : 1  0 1  0        σ   : 1  0  1  1
        KB ⊄ KA                 KA,B ⊄ KA

Fig. 6.11 Illustrating (6.11)

As with the other operators — the adjacency operator α and the consensus operator κ — its arguments vary over the range {0, 1, xi}. But σ is not commutative. Applying the subset operator σ to the n-tuples (or vectors) ϕ and ψ is explained as applying it to equally positioned elements of the n-tuples (so it is a scalar operator). The result is referred to as the subset vector σ:

    σ := ϕ σ ψ := (ϕ1 σ ψ1, . . . , ϕn σ ψn).    (6.10)

Employing the subset operator, the claim is that

    A K-set Kϕ is a subset of a K-set Kψ — symbolically, Kϕ ⊆ Kψ — iff the subset vector σ = ϕ σ ψ consists exclusively of 1s.    (6.11)

We shan't prove (6.11) in this text because the predicate logic required has not been covered. But do take a look at the illustrating examples of Figure 6.11 — K5 ⊆ K5,13 and KC ⊆ KA,C on the one hand, and KA,B ⊄ KA and KB ⊄ KA on the other.
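Although the proof is left open, the test (6.11) itself is easy to mechanise, as the following sketch (mine, not the book's) shows:

def sigma_i(a, b):                 # the table of Figure 6.10
    if b == 'x':
        return 1                   # a free position of K_psi accepts 0, 1 and x
    return 1 if a == b else 0      # a fixed position must match exactly

def is_subset(phi, psi):           # (6.11): all components of sigma must be 1
    return all(sigma_i(a, b) for a, b in zip(phi, psi))

print(is_subset('0101', 'x101'))   # True:  K5 ⊆ K5,13
print(is_subset('000x', 'x1x1'))   # False: KB ⊄ KA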

Chapter 7

Algebraic Minimisation

Minimising relatively small switching functions (of no more than, say, six input variables) can be done quite efficiently by VK-map. Larger problems should be solved by computer. To be able to write computer programs for minimisation, we need to (a) develop an algorithm by which to find a full cover, and (b) a further algorithm by which to select a minimal cover from the full cover. There are a large number of algorithms for finding a full cover. The one put forth in Section 1 was developed by Vingron [2004]. It is based on the concept of consensus, so that we need not start with a canonical normal form. Section 2 presents Vingron’s [2004] procedure for finding a minimal cover. Both algorithms — the one for finding the full cover, and the one for finding the minimal cover — are well suited for developing computer programs; but such programs are not contained in this book, as it is not intended to give preference to any computer language.

7.1 Finding the Full Cover The first step to take in minimising the logic expression of any given set of setting input events .E1 (or .E0 , if you so wish to work with the resetting input events) is to find the full cover of prime sets of .E1 which we denote as .Pmax (E1 ). You can specify .E1 in any way you find appropriate. For the example of this section we want to minimise the function depicted in Figure 7.1 in which we have pre-collected the setting input events of .E1 into the K-sets .K011010 , .K0x1000 , .K1x1100 , .K111x10 , .K11x10x , .Kx1x00x , .K1xx0xx . Now, in formulating the algorithm, we associate it closely to our example and its further development in the following figures.


Fig. 7.1 Visualising a function to be minimised

Iterative consensus algorithm for finding the full cover Pmax(E1) of the setting input events E1:

(a) Start by listing the multiple input events (ϕ1, · · · , ϕn) of K-sets that cover E1 (e.g., list 1 of Figure 7.2). The duplicates of multiple input events in this list are dropped, as are all the multiple input events which, according to (6.11), specify K-sets that are subsets of other K-sets of the list. Call the resulting list a Basic List (again, list 1 of Figure 7.2).
(b) Each of the multiple input events of the Basic List is checked against the others for adjacency according to (6.6). For each pair of multiple input events that are adjacent, you establish their consensus according to (6.9), appending it to the Basic List (see list 2 of Figure 7.2).
(c) The duplicates of multiple input events in the last list are dropped, as are all the multiple input events which, according to (6.11), specify K-sets that are subsets of other K-sets of the list. This provides us with a new Basic List (e.g., list 3 of Figure 7.2).
(d) Repeat steps (b) and (c) until two successive Basic Lists are identical, in which case the algorithm terminates.    (7.1)

List 1        List 2               List 3
a 011010      a     011010         d     111x10
b 0x1000      b     0x1000         f     x1x00x
c 1x1100      c     1x1100         g     1xx0xx
d 111x10      d     111x10         a κ b 0110x0
e 11x10x      e     11x10x         a κ g x11010
f x1x00x      f     x1x00x         b κ g xx1000
g 1xx0xx      g     1xx0xx         c κ d 1111x0
              a κ b 0110x0         c κ g 1x1x00
              a κ d x11010         e κ g 11xx0x
              a κ f 0110x0
              a κ g x11010
              b κ g xx1000
              c κ d 1111x0
              c κ f 111x00
              c κ g 1x1x00
              d κ e 1111x0
              d κ f 1110x0
              e κ f 11xx0x
              e κ g 11xx0x

Fig. 7.2 Illustrating step (a) and the iterative steps (b) and (c) of (7.1)

List 1    List 2    List 3    List 4    Cϕ1...ϕ6
011010    111x10    x1x00x    x1x00x    X2 X̄4 X̄5
0x1000    x1x00x    1xx0xx    1xx0xx    X1 X̄4
1x1100    1xx0xx    xx1000    xx1000    X3 X̄4 X̄5 X̄6
111x10    0110x0    1x1x00    1x1x00    X1 X3 X̄5 X̄6
11x10x    x11010    11xx0x    11xx0x    X1 X2 X̄5
x1x00x    xx1000    111xx0    111xx0    X1 X2 X3 X̄6
1xx0xx    1111x0    x110x0    x110x0    X2 X3 X̄4 X̄6
          1x1x00
          11xx0x

Fig. 7.3 Results of the iteration

List 1 of the table of Figure 7.3 is the original basic list of our example, obtained by step (a), while lists 2 to 4 are the successive basic lists, each obtained by the iterative steps (b) and (c). This example represents one of those relatively rare cases where the result is a single irredundant cover, meaning that Pmax(E1) = Pmin(E1). I encourage you to visualise this result in a VK-map such as that of Figure 7.1. However, in general, the resulting list contains an abundance of prime sets, allowing us to choose from them a number of different covers. Of these, the minimal covers are of prime interest with respect to minimisation.
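The book deliberately contains no programs, but purely as an illustration, here is one possible Python rendering of algorithm (7.1) (the function names are mine), run on the Basic List of Figure 7.2. It reproduces the seven prime sets of Figure 7.3, though not necessarily in the book's order:

def alpha(p, q):                   # adjacency vector, Figure 6.5
    return [1 if {a, b} == {'0', '1'} else 0 for a, b in zip(p, q)]

def consensus(p, q):               # consensus vector, Figure 6.8b
    pick = lambda a, b: 'x' if {a, b} == {'0', '1'} else (b if a == 'x' else a)
    return ''.join(pick(a, b) for a, b in zip(p, q))

def subsumed(p, q):                # K_p ⊆ K_q, test (6.11)
    return all(b == 'x' or a == b for a, b in zip(p, q))

def full_cover(terms):
    basic = list(dict.fromkeys(terms))                 # step (a): drop duplicates
    basic = [t for t in basic
             if not any(t != u and subsumed(t, u) for u in basic)]
    while True:
        new = list(basic)
        for i, p in enumerate(basic):                  # step (b): all consensuses
            for q in basic[i + 1:]:
                if sum(alpha(p, q)) == 1 and consensus(p, q) not in new:
                    new.append(consensus(p, q))
        new = [t for t in new                          # step (c): drop subsets
               if not any(t != u and subsumed(t, u) for u in new)]
        if new == basic:                               # step (d): fixed point
            return basic
        basic = new

print(full_cover(['011010', '0x1000', '1x1100', '111x10',
                  '11x10x', 'x1x00x', '1xx0xx']))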


7.2 Finding Minimal Covers

The procedure presented in this section allows us to directly find a minimal cover Pmin(E1) ⊆ Pmax(E1) among all the prime sets. In fact, the first cover found is a minimal cover, allowing us to terminate the search if we wish. This method assumes we have determined the full cover Pmax(E1), for instance, by the iterative consensus algorithm (7.1) of the previous section. Let us use a simple example of a switching function of four input variables for which E1 = {2, 3, 5, 6, 7, 9, 11, 13}, and further assume we have determined all its prime implicants — in this simple case, possibly with the help of a VK-map (Figure 7.4):

    PA = {5, 7}      PB = {9, 11}     PC = {3, 11}
    PD = {5, 13}     PE = {9, 13}     PF = {2, 3, 6, 7}

We start the procedure by writing the table of covered input events, as shown in Figure 7.5a (the equivalent of McCluskey's [1956] prime implicant table). Each row represents a prime set Pϕ, while each column stands for an input event e of the setting input events E1. The intersection of a column e and a row Pϕ contains the truth value (0 or 1) of the proposition e ∈ Pϕ. A table of covered input events can be simplified if it contains one or more essential prime sets. An essential prime set is a prime set that contains at least one input event e — call it a unique input event — not contained in any other prime set. The column of a unique input event contains only a single 1, the row in which it occurs marking the essential prime set. An essential prime set is an element of every irredundant cover. Having found the essential prime sets, we need not search for them any further, and may thus remove them from the table of prime sets. Removing all essential prime sets from the table allows us to also remove the columns of all the input events that are elements of the essential prime sets. Our table of covered input events contains PF as the only

Fig. 7.4 The maximum and minimum covers of a given function


Fig. 7.5 Table of covered input events

essential prime set. Removing it and the columns of all input events it contains leads to Figure 7.5b. The last step is to develop the table of possible covers — see Figure 7.6 — for the prime sets of the reduced table of covered input events. The basic idea is to test, for each union (∪) of combinations of prime sets, whether that union contains all setting input events e of E1S of Figure 7.5b. Figure 7.6 details how such a test can be carried out. The combinations of prime sets are organised into groups as shown in the column 'Possible covers of E1S'. Group 1 contains per row a single prime set of the five prime sets PA, PB, PC, PD, PE — there are thus (5 choose 1) = 5 rows in group 1, actually making it a repetition of Figure 7.5b. Group 2 contains, row-wise, all combinations (without repetition) of two of the prime sets from PA, PB, PC, PD, PE, so that there are (5 choose 2) = 10 rows in group 2. Similarly, group 3 consists of the (5 choose 3) = 10 rows of the combinations (without repetition) of three of the prime sets of PA, PB, PC, PD, PE, and so on for all remaining groups.

Now concentrate on the columns marked by the input events e — 5, 9, 11, 13 — of E1S. The 1s state which input events e are elements of the union of combinations of prime sets. For group 1 of Figure 7.6, the 1s for which e ∈ E1S are copied from Figure 7.5b. The logic vectors in the rows of the E1S-column are obtained by scalar disjunction of those vectors indicated in the combinations. Scalar disjunction of logic vectors refers to the disjunction of equally indexed elements. E.g., the logic vector of row 6, (1, 1, 1, 0), is the scalar disjunction of the vectors of rows 1 (for PA) and 2 (for PB) — (1, 0, 0, 0) ∨ (0, 1, 1, 0) ⇔ (1, 1, 1, 0). Results, once calculated, may be used again. For instance, interpreting PA ∪ PB ∪ PE of row 18 as (PA ∪ PB) ∪ PE allows us to calculate its logic vector from rows 6 (for PA ∪ PB) and 5 (for PE) — (1, 1, 1, 0) ∨ (0, 1, 0, 1) ⇔ (1, 1, 1, 1). The table is developed row by row, until the first logic vector containing only 1s is encountered. The prime sets of the union are a partial minimal cover Pmin(E1S) — see row 11 (partial because the essential covers have not yet been considered). If there are more minimal covers than one, they are all to be found


Fig. 7.6 Table of all possible covers

in the same group. So, if you want to find all minimal covers, you let the algorithm continue till the end of the group. The complete result is obtained by augmenting the covers found here by the previously omitted essential covers — in our example by PF — giving us Pmin(E1) = {PB, PD, PF}, the result visualised in Figure 7.4b. If you feel a desire to find all irredundant covers, continue with the algorithm as shown in Figure 7.6, discarding those solutions whose subsets have already proven to be irredundant covers. Thus, row 17 is not accepted as a solution as {PB, PD} of row 11 is a subset of {PA, PB, PD} of row 17 — {PB, PD} ⊂ {PA, PB, PD}.
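The group-wise search of Figure 7.6 is equally programmable. The sketch below (my own illustration) walks through the groups in order of increasing size and returns every cover found in the first successful group; by construction these are exactly the minimal covers:

from itertools import combinations

primes = {'PA': {5, 7}, 'PB': {9, 11}, 'PC': {3, 11},
          'PD': {5, 13}, 'PE': {9, 13}}
E1S = {5, 9, 11, 13}          # the setting events left after removing PF

def minimal_covers(primes, events):
    for size in range(1, len(primes) + 1):            # group 1, group 2, ...
        found = [set(combo) for combo in combinations(primes, size)
                 if events <= set().union(*(primes[p] for p in combo))]
        if found:
            return found      # all minimal covers lie in this group
    return []

print(minimal_covers(primes, E1S))     # the single partial cover {PB, PD};
# augmented by the essential prime set PF: Pmin(E1) = {PB, PD, PF}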


For the example E1 = {2, 3, 5, 6, 7, 9, 11, 13} of this section, we found the minimal cover to be Pmin(E1) = {PB, PD, PF}, which is more precisely expressed as Pmin(E1) = {P10x1, Px101, P0x1x}. Then, by the evaluation formulas of (5.19) we may write the disjunctive form as

    Y(x) ⇔ ⋁ϕ:Pϕ∈Pmin(E1) Cϕ(x) ⇔ C10x1 ∨ Cx101 ∨ C0x1x
         ⇔ X1X̄2X4 ∨ X2X̄3X4 ∨ X̄1X3.    (7.2)

For the same example, knowing E1, we can write its complement E0 as E0 = {0, 1, 4, 8, 10, 12, 14, 15}. From this, we can calculate the minimal cover Pmin(E0) = {P000x, P111x, Pxx00}, which I encourage you to do. Then, again, (5.19) enables us to write the conjunctive form of the given function:

    Y(x) ⇔ ⋀ϕ:Pϕ∈Pmin(E0) Dϕ(x) ⇔ D000x D111x Dxx00
         ⇔ (X1 ∨ X2 ∨ X3)(X̄1 ∨ X̄2 ∨ X̄3)(X3 ∨ X4).    (7.3)

7.3 Minimisation considering Don't Cares

Some switching problems don't allow us to define output values for all input events. We call such problems incompletely specified, referring to the unspecified output values as "don't cares" and denoting them as δx, the index x marking the input event whose output value is unspecified. The values — 0 or 1 — of each δx can be chosen independently of one another. But do note that, to calculate a circuit for an incompletely specified problem, each δx must be given a specific value. Most frequently the index x is omitted. To see how to cope with incompletely specified problems, let us discuss Figure 7.7, for which we want to develop a minimal cover, Pmin(E1), of the setting input events, E1. As before, E1 is the set of all input events x whose cells in a VK-map contain an output value of 1 — in our example E1 = {3, 4, 5, 6, 7, 8, 9, 10}. Take Eδ to be the set of the don't care input events, in our case Eδ = {0, 1, 2, 13, 14, 15}. We solve the problem in two steps. First, we assign all don't care outputs — δ0, δ1, δ2, δ13, δ14, δ15 — the value of 1, effectively working with a new set, call it E1δ, of setting input events E1δ = {3, 4, 5, 6, 7, 8, 9, 10; 0, 1, 2, 13, 14, 15}. For this set we develop the maximum cover of prime sets, Pmax(E1δ), preferably by using the iterative consensus algorithm (7.1), as shown in Figure 7.8. In the second step we search for and drop those prime sets of Pmax(E1δ) that are redundant in the sense that dropping them leaves all setting input events x ∈ E1


Fig. 7.7 An incompletely specified function we wish to minimise. The unspecified output values are marked as δ for 'don't care'

E1δ    List 1    List 2    List 3    List 4    Pmax(E1δ)
 0     0000      000x      00xx      x00x      Px00x = PA
 1     0001      00x0      0x0x      x0x0      Px0x0 = PB
 3     0011      0x00      x00x      xx01      Pxx01 = PC
 2     0010      x000      0xx0      xx10      Pxx10 = PD
 4     0100      00x1      x0x0      x1x1      Px1x1 = PE
 5     0101      0x01      0xx1      x11x      Px11x = PF
 7     0111      x001      xx01      0xxx      P0xxx = PG
 6     0110      001x      0x1x
13     1101      0x11      xx10
15     1111      0x10      01xx
14     1110      x010      x1x1
 8     1000      010x      x11x
 9     1001      01x0
10     1010      01x1
                 x101
                 011x
                 x111
                 x110
                 11x1
                 1x01
                 111x
                 1x10
                 100x
                 10x0

Fig. 7.8 Table of the iterative consensus method for developing the full cover Pmax(E1δ) = {PA, PB, PC, PD, PE, PF, PG} of E1δ

still covered by at least one (remaining) prime set of Pmax(E1δ). The search starts by developing the table of covered input events shown in Figure 7.9.


Fig. 7.9 Table of covered input events showing which setting input events x ∈ E1 are covered by prime sets of Pmax(E1δ)

Fig. 7.10 Reduced table of covered input events

In this table we look for essential prime sets, dropping them and all columns containing their input events. The only essential prime set is PG, its input events 3 and 4 not being elements of any of the other prime sets. By dropping the row for PG, and the columns for the input events 3, 4, 5, 6, 7, we obtain the reduced table of covered input events of Figure 7.10a. There are two conspicuous rows in Figure 7.10a — those for PE and PF, both containing only 0s. These rows represent redundant prime sets. Eliminating them leads to the reduced table of Figure 7.10b. The table of all possible covers, Figure 7.11, is developed as explained in the previous section. As we are only interested in minimal covers, the first already occurring in group 2, we need develop the table for no further group. After augmenting the thus found results by the essential prime set PG we, as expected, obtain the same complete result as from the VK-map of Figure 7.7:

    (a) Pmin(E1) = {PA, PB, PG} = {Px00x, Px0x0, P0xxx},
    (b) Pmin(E1) = {PA, PD, PG} = {Px00x, Pxx10, P0xxx},
    (c) Pmin(E1) = {PB, PC, PG} = {Px0x0, Pxx01, P0xxx}.


Fig. 7.11 Table of all possible covers

For these minimal covers you can then use the AND-to-OR evaluation formula (5.19) to obtain the results:

    Y(a) ⇔ ⋁ϕ:Pϕ∈Pmin(E1)(a) Cϕ ⇔ (X̄2X̄3) ∨ (X̄2X̄4) ∨ X̄1,
    Y(b) ⇔ ⋁ϕ:Pϕ∈Pmin(E1)(b) Cϕ ⇔ (X̄2X̄3) ∨ (X3X̄4) ∨ X̄1,
    Y(c) ⇔ ⋁ϕ:Pϕ∈Pmin(E1)(c) Cϕ ⇔ (X̄2X̄4) ∨ (X̄3X4) ∨ X̄1.

The parentheses in the above formulas are only employed to increase readability. Although the formulas for Y(a), Y(b), and Y(c) differ, their values are identical for every specified input event, which warrants using the same variable Y in all three cases.
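This agreement on the specified input events can be checked mechanically. The sketch below (mine, not the book's; it takes x1 to be the high-order bit of the cell number) verifies that Y(a), Y(b) and Y(c) coincide on every event of E1 and E0, while remaining free to differ on the don't cares:

E1 = {3, 4, 5, 6, 7, 8, 9, 10}
E0 = {11, 12}            # the specified 0-events: neither in E1 nor don't care

def bits(e):
    return tuple((e >> k) & 1 for k in (3, 2, 1, 0))

def Ya(x1, x2, x3, x4): return (not x2 and not x3) or (not x2 and not x4) or not x1
def Yb(x1, x2, x3, x4): return (not x2 and not x3) or (x3 and not x4) or not x1
def Yc(x1, x2, x3, x4): return (not x2 and not x4) or (not x3 and x4) or not x1

for e in E1 | E0:
    values = {bool(f(*bits(e))) for f in (Ya, Yb, Yc)}
    assert len(values) == 1 and values.pop() == (e in E1)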

Chapter 8

Design by Composition

The normal forms discussed in Chapter 4 allow us to design combinational circuits using only basic elementary connectives: the canonical normal forms employ AND, OR, and NOT, while the Zhegalkin normal forms realise circuits with XOR and AND or, respectively, with EQU and OR. Taking circuit design to a higher level, we would want to use the normal forms to design small circuit modules, if only we had a procedure that would allow us to put these modules together so as to realise a specified problem. Such a process is referred to as composing a circuit (using previously developed and usually well-tested modules). In the simplest case, these modules can themselves be elementary connectives, e.g., NAND or NOR. But the power of a composition procedure lies in its ability to cope with more complicated modules. The composition procedure presented here is taken from Vingron [2004]. The converse of composition is decomposition, a procedure developed by Ashenhurst [1957], in which he showed how to split a switching function into two smaller functions, each, hopefully, being easier to realise.

8.1 The Basic Concept

To simplify explanations, we shall always use one and the same function as a module in each composition step of an example. As our main considerations turn on this function, we choose to call it a hinge function. For example, say we want to develop a circuit that realises the EQUIVALENCE specified in Figure 8.1a. Using NOR intuitively as a hinge function, we could (with luck) develop the circuit pictured in Figure 8.1b. On the other hand, taking the normally closed pneumatic valve as a hinge function, we could realise the EQUIVALENCE as shown in Figure 8.1c. Of course, using a single hinge function is only possible if it is functionally complete. The basic idea is to develop the circuit from the output to the inputs. This is emphasised in Figure 8.2a where the specified function f is decomposed into the hinge function h at the output of the circuit, and the so-called generic function

8.1 The Basic Concept To simplify explanations, we shall always use one and the same function as a module in each composition step of an example. As our main considerations turn on this function, we choose to call it a hinge function. For example, say we want to develop a circuit that realises the EQUIVALENCE specified in Figure 8.1a. Using NOR intuitively as a hinge function, we could (with luck) develop the circuit pictured in Figure 8.1b. On the other hand, taking the normally closed pneumatic valve as a hinge function, we could realise the EQUIVALENCE as shown on Figure 8.1c. Of course, using a single hinge function is only possible if it is functionally complete. The basic idea is to develop the circuit from the output to the inputs. This is emphasised in Figure 8.2a where the specified function f is decomposed into the hinge function h at the output of the circuit, and the so-called generic function © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 S. P. Vingron, Logic Circuit Design, https://doi.org/10.1007/978-3-031-40673-7_8

89

90

8 Design by Composition

Fig. 8.1 Realisations by single hinge functions
Fig. 8.2 Decomposing the specified function f into the hinge function h and the generic function g

g, the outputs of which are used as inputs to the hinge function. The name generic function is chosen to remind us that there is not a single, uniquely defined function g that we need only calculate or determine; rather, g is a function we can choose from a whole class of functions that, together with the hinge function h, are a realisation of the specified function f. The generic function g is a multiple-output function with as many outputs as the hinge function h has inputs. The mathematician refers to the functions h and g of Figure 8.2 as being catenated, or chained together, to create the function f. Catenation can be pictured very descriptively using mappings so that we can do all our reasoning graphically.

8.2 Catenation

The word catenation originates from 'catena', the Latin word for chain. As a technical term, catenation refers to the chaining together of two functions. To get a feeling for this chaining together of functions, consider the example shown in Figure 8.3. The functions we wish to catenate (not identical with the equally named functions of the previous section), g : X → Y and h : Y → Z, are defined in Figure 8.3a, this representation being possible because Y is simultaneously the co-domain of g : X → Y and the domain of h : Y → Z. While Figure 8.3a visualises the process of catenating, Figure 8.3b shows the result of the catenation, the composite function hg : X → Z. The composite function hg is constructed by following the arrows from X via Y to Z. For instance, if we follow the arrow in Figure 8.3a from x1 we come to y2, and from here via the next arrow to z1. The


Fig. 8.3 An example demonstrating the catenation of functions

composite function hg summarises these successive mappings, simply stating that x1 maps to z1. In a formal definition of catenation, we want to make sure that arrows are correctly chained by only using those elements of Y that are images of g, i.e., those y for which y = g(x) holds. We thus formulate:

    Given arbitrary functions g : X → Y and h : Y → Z, their catenation hg : X → Z (hg being read as 'h follows g') is specified as the set hg of ordered pairs (x, z) such that

        (x, z) ∈ hg :⇔ ⋁y∈Y ((x, y) ∈ g ∧ (y, z) ∈ h)    (8.1)

    for any substitution instances x ∈ X and z ∈ Z.

The whole definition (8.1) is often abbreviated by the picturesque notation

    X →^{hg} Z  =  X →^{g} Y →^{h} Z,    (8.2)

which emphasises that Y is simultaneously the co-domain of g and the domain of h. Without further proof, let me state that the above definition allows us to deduce the following explanation for the conventional notation h(g(x)):

    h(g(x)) = (hg)(x).    (8.3)

The principal property of catenation is associativity, without which the catenation of circuits would not work. Associativity means that, given three functions such that

    X →^{g} Y →^{h} Z →^{r} W,

the sequence in which catenation is carried out is irrelevant:

    rhg = r(hg) = (rh)g.    (8.4)
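For finite sets, catenation amounts to chaining two look-up tables. In the sketch below (my own; only the arrows x1 → y2 → z1 are taken from Figure 8.3, the remaining arrows are invented for the example) a function is simply a Python dict:

def catenate(h, g):
    """Return hg, read 'h follows g', as a dict from X to Z, per (8.1)."""
    return {x: h[y] for x, y in g.items()}

g = {'x1': 'y2', 'x2': 'y1', 'x3': 'y2'}    # g : X -> Y
h = {'y1': 'z2', 'y2': 'z1', 'y3': 'z2'}    # h : Y -> Z

hg = catenate(h, g)
print(hg)                          # {'x1': 'z1', 'x2': 'z2', 'x3': 'z1'}
assert hg['x1'] == h[g['x1']]      # (8.3): (hg)(x) = h(g(x))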


8.3 Visualising the Composition Problem

When composing a specified function f, we first need to decide on, and sometimes analyse, the hinge function(s) we wish to employ. If we want to use the normally closed valve as hinge function (as in Figure 8.1), we must develop its Karnaugh map. Applying all possible values, 0 and 1, for the input variables y1, y2, and y3 (Figures 8.4a,b) allows us to obtain the output values for z entered into the VK-map of Figure 8.4c. From this VK-map we read the logic formula Z ⇔ (Ȳ1 ∨ Y2)Y3, which in turn leads to the composite symbol of Figure 8.4d. For future reference, and this will turn out to be the prime objective of the analysis of the hinge function, we read from the VK-map the set Y0 of input events (y1, y2, y3) that cause the output z to be 0, as well as the set Y1 of those input events (y1, y2, y3) that cause the output z to be 1 (Figure 8.4e).

Now let us attempt to use the concept of catenation to compose the EQUIVALENCE (see Figures 8.1a,c) when using the normally closed valve of Figure 8.4 as hinge function. We first draw the function f, the function we wish to compose (the EQUIVALENCE of Figure 8.1a), as a mapping from X to Z, Figure 8.5a. The catenation process is started by drawing the mapping h : Y → Z of the NC valve (see Figure 8.5b), which follows effortlessly from the VK-map of this hinge function (repeated in Figure 8.5d). The actual problem we face is how to find the generic function g such that hg = f is fulfilled. Interpreting this graphically, we want to know how to draw arrows from X to Y. We consider this problem in Figure 8.5. For instance, note that f maps (0, 0) ∈ X to 1 ∈ Z as pictured in sub-figure (a). In the catenation diagram of sub-figure (b), g must map (0, 0) ∈ X to one (but, of course, only to one) input event (y1, y2, y3) that itself is mapped to 1 ∈ Z by h. Each of these destinations, (y1, y2, y3), is equally valid with respect to catenation. May I encourage you to extend the above considerations to each element (x1, x2) of X in sub-figure (b), using the corresponding mappings of sub-figure (a) to decide which

Fig. 8.4 The NC-valve as hinge function


Fig. 8.5 The NC-valve as hinge function h

elements (y1, y2, y3) ∈ Y you may map to. In the next section we formulate the above considerations in a more formal way.

8.4 Choosing a Generic Function

Assume the function we wish to design, namely f : X → Z, to have n input variables. For a simpler notation, we write its binary input events (x1, · · · , xn) ∈ X in their decimal form x, where x = x1·2^(n−1) + x2·2^(n−2) + · · · + xn·2^0. Similarly, we take the hinge function h : Y → Z to have m inputs y1, · · · , ym, and abbreviate its binary input events (y1, · · · , ym) to their decimal form y, with y = y1·2^(m−1) + · · · + ym·2^0.

To obtain a generic function g : X → Y, its domain X and co-domain Y are partitioned thus (noting that z ∈ Z and Z = {0, 1}):

    Xz := {x | f(x) = z},    Yz := {y | h(y) = z},    with z ∈ {0, 1},    (8.5)

these partitions being used as domains of partial functions

    fz : Xz → {z},    hz : Yz → {z},    with z ∈ {0, 1}.    (8.6)

Xz and Yz, being partitions of X and Y, respectively, have the basic properties:

    X0 ∪ X1 = X    and    X0 ∩ X1 = ∅,
    Y0 ∪ Y1 = Y    and    Y0 ∩ Y1 = ∅.

The partitions Xz and Yz, (8.5), allow us to define the partial generic functions g0 and g1 as

    gz : Xz → Yz    with z ∈ {0, 1}.    (8.7)


Fig. 8.6 Continuing the example of the previous section on how to define a generic function g using VK-maps

Due to the above definitions, (8.5) to (8.7), we may write

    f0 ∪ f1 = f,    f0 ∩ f1 = ∅,
    h0 ∪ h1 = h,    h0 ∩ h1 = ∅,
    g0 ∪ g1 = g,    g0 ∩ g1 = ∅,

and, most importantly,

    gz : Xz → Yz ⇒ hz gz = fz,    with z ∈ {0, 1},    (8.8)

which states that any arbitrarily chosen function gz from Xz = {x | f(x) = z} to Yz = {y | h(y) = z} — where z ∈ {0, 1} — fulfils the catenation condition hz gz = fz.

The co-domains Y0 and Y1 of the partial generic functions g0 and g1 are determined from the hinge function (as, e.g., shown in Figure 8.4) and are thus called '0 hinge-set' and '1 hinge-set', respectively. Assuming the hinge function h : Y → {0, 1} to have m binary inputs (y1, . . . , ym), so that it is described by an m-dimensional VK-map whose cells contain either 0 or 1,

• the 0 hinge-set Y0 = {(y1, . . . , ym) | h(y1, . . . , ym) = 0} consists of the binary descriptions (y1, . . . , ym) of all cells containing an output value of 0;
• the 1 hinge-set Y1 = {(y1, . . . , ym) | h(y1, . . . , ym) = 1} consists of the binary descriptions (y1, . . . , ym) of all cells containing an output value of 1.

The above considerations on catenation are conveniently expressed when using VK-maps, as the continuation of our example of the previous section shows in Figure 8.6. The output values of 1 in the specified function f correspond to arbitrary elements (y1, y2, y3) of Y1 in equivalent cells of g; the output values of 0 in the specified function f correspond to arbitrary elements (y1, y2, y3) of Y0 in equivalent cells of g. Generalising on this example, we can formulate the following


Procedure for developing the VK-map of a generic function g when given the VK-map for the specified function f, and the hinge sets Y0 and Y1 of a hinge function h:

(a) If a cell e of the VK-map for f contains an output value z of 0, then the equally positioned cell e of the VK-map for g contains an arbitrary element or tuple of Y0.
(b) If a cell e of the VK-map for f contains an output value z of 1, then the equally positioned cell e of the VK-map for g contains an arbitrary element or tuple of Y1.    (8.9)
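Procedure (8.9) can be animated for the running example. In the following sketch (my own; the triples it picks are not the carefully chosen ones of Figure 8.7), any pick from the proper hinge set yields hg = f, which the final assertion confirms:

from itertools import product

def h(y1, y2, y3):                     # the NC-valve: Z ⇔ (Ȳ1 ∨ Y2)Y3
    return int((not y1 or y2) and y3)

Y0 = [y for y in product((0, 1), repeat=3) if h(*y) == 0]   # the 0 hinge-set
Y1 = [y for y in product((0, 1), repeat=3) if h(*y) == 1]   # the 1 hinge-set

def f(x1, x2):                         # the EQUIVALENCE to be composed
    return int(x1 == x2)

# (8.9): in every cell of f, inscribe some triple of the proper hinge-set.
g = {x: (Y1 if f(*x) else Y0)[0] for x in product((0, 1), repeat=2)}

for x in product((0, 1), repeat=2):
    assert h(*g[x]) == f(*x)           # catenation holds whatever the picks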

8.5 Composing an Example Circuit

Here we return to the introductory example of this chapter, Figure 8.1c, to see how the EQUIVALENCE can be developed on the basis of the NC-valve. We start with the VK-map of the EQUIVALENCE of Figure 8.7a, the function f to be synthesised, and with the hinge sets Y0 and Y1 of the hinge function we wish to employ (detailed in Figure 8.4e). To develop a generic function g we repeat the VK-map for f, except that, according to procedure (8.9), each inscribed 1 is replaced by an arbitrarily chosen

Fig. 8.7 Continuing the example of the previous section on how to define a generic function g using VK-maps


triple (y1, y2, y3) of Y1, and each inscribed 0 is replaced by an arbitrarily chosen triple (y1, y2, y3) of Y0. Our choice for these triples is shown in (b)¹. The triple-output VK-map thus obtained is split into the three VK-maps (c), (d), and (e) of the last row.

Although the choice of the triples (y1, y2, y3) is arbitrary — within the limitations of procedure (8.9) — it is clearly not random. The triples were carefully selected from Y0 and Y1 to achieve the IDENTITIES Y1 ⇔ X1 and Y2 ⇔ X2. I strongly recommend varying the choices for (y1, y2, y3) to see the effect on the result obtained. The composed circuit of Figure 8.7f is drawn by starting at the circuit's output with the composite symbol for the hinge function. The inputs Y1, Y2, Y3 are taken from the VK-maps of Figures 8.7c–e, noting that Y3 ⇔ X1 ∨ X̄2 can be realised by the composite function of the NC-valve.

Actually, the above example was terminated prematurely. The processing of a VK-map should continue unless the VK-map represents an IDENTITY or a constant, i.e., FALSITY or TRUTH. The VK-map of Figure 8.7e represents an IMPLICATION, so why terminate processing? Clearly, because our composite logic function can realise the IMPLICATION. Nevertheless, let us continue the process in a formal way where it was interrupted, using the opportunity to take a closer look at how one might choose n-tuples from the hinge sets Y0 and Y1.

In Figure 8.8 we continue developing the function specified in the VK-map of Figure 8.7e. The composition procedure requires us to replace the 1s in the cells of (a)² by elements, i.e., ordered triples, of Y1.

Fig. 8.8 Discussing the choice of n-tuples from Y0 and Y1

¹ In this paragraph (b), (c), etc. refer to sub-figures of Figure 8.7.
² In this paragraph (a), (b1), ..., (e) refer to the correspondingly labelled sub-figures of Figure 8.8.


As each triple has 1 as its last element, we have no choice but to begin with a VK-map for the generic function as the one shown in (b1). If we could place a 1 in the third position of cell number 1 of (b1), we would be able to express Y33 as TRUTH. The triple to be inscribed in cell 1 of (b1) must be taken from Y0 (why?). There is only one such triple which has 1 as its last element, (1, 0, 1), which brings us to (b2). Now, having a 1 in position 1 of cell 1, it would be advantageous to also have a 1 in position 1 of cell 3. We could then describe Y31 as an IDENTITY (i.e., Y31 ⇔ X2) — assuming, of course, that we could place 0s in the first positions of cells 0 and 2. These aspirations are expressed in (b3), where the second position of cell 3 by necessity becomes 1. This leads to the desire to describe Y32 as an IDENTITY, i.e., as Y32 ⇔ X1, which necessitates us to fill the VK-map as shown in (b4). This triple-output VK-map is then split into the single-output VK-maps of the last row of Figure 8.8.

Reviewing Part I

You may have noticed that where proofs are presented, they are restricted to theorems on a finite and fixed number of variables. All such theorems can, in principle, be proved by using truth tables, a tool of propositional logic. What are not covered are proofs of theorems on an undefined number of variables. Such proofs require an extended form of logic called predicate logic, see, e.g., Vingron [2004]. Nevertheless, this book does start off with a predicate logical expression, namely with the definition of a logic variable in Eq. (1.2). It is hard to overemphasise the importance and effect the formal definition of a logic variable has on the development of switching algebra: it not only allows us to express an events table directly as a logic formula, but also to formally define min- and maxterms, (4.1) and (4.2), with which to develop the canonical normal forms, (4.3) and (4.4). Furthermore, the introduction of Karnaugh sets, (5.2), via generalised minterms, (5.1), allows us to use the Veitch-Karnaugh map in a formal manner to develop and prove tautologies, and to develop the Veitch-Karnaugh evaluation formulas, (5.19), the formulas with which to calculate circuits specified by VK-maps.

Three further aspects should be pointed out that you will not find elsewhere in the common literature. The formulas on commutativity, associativity, and distributivity put forth are the only such formulas that exist. They were developed using computer programs (written in APL), making sure that none were missed. In general, in switching-algebraic calculations make sure you abide by these laws (and don't 'invent' your own).

Minimising logic functions developed with canonical normal forms will always be a necessity. The method developed in chapter 7 is based on adjacency and consensus in VK-maps, developed in chapter 6, and has the advantage that one need not start with a canonical normal form. Also, it is well adapted to being implemented on a computer.


Composition is a totally uncommon approach to the synthesis of combinational circuits, and has the enormous advantage of allowing us to design a circuit on the basis of any predefined combinational logic function with two or more inputs. The mathematical tool behind this method is the catenation of functions. We will encounter catenation again in the development of latches, and use it iteratively (i.e., in repeated fashion) when developing sequential circuits, making catenation a principal tool of switching algebra.

Part II

Theory of Latches

Introduction to Part II

Beginning with Part II we enter the realm of such circuits that have the ability to react not only to present input events but also to a succession of input events. They are appropriately called sequential circuits. The simplest of these, those with the shortest memories, are called latches. Not surprisingly, latches are the modules that constitute the memorising ability of all sequential circuits. A common misconception is that latches are always electric or electronic circuits with feedback. In fact, latches come in all kinds of technologies, such as pneumatics (for instance, in the almost forgotten field of fluidics), hydraulics, or even mechanical devices — and most of the example latches in these fields work without feedback. Thus, a theory of latches (or, more correctly, of elementary memory), as put forth in chapter 9, is not and cannot be based on the concept of feedback. Nevertheless, the general, feedback-free theory of latches shows its value when it is adapted to latches with feedback — for this theory then enables us to design any latch as a feedback circuit. Feedback latches have an inherent fault: depending on the working of the feedback loop, memorisation can fail. Importantly, the theory of feedback latches points to latch designs that do not have this inherent memorisation failure, latches that are fail-safe. Feedback latches are covered in chapter 10. Apart from the direct design of a latch as a feedback circuit, one can design latches by composition. This method refers to using any chosen latch, and preceding it by an appropriate combinational logic to obtain a new latch function, a function one desires. This very powerful method is described in chapter 11.

Chapter 9

Basic Theory of Latches

To be able to develop an abstract concept of a latch, we need something from which to abstract. Section 1 introduces us to a number of devices and a circuit that intuitively have in common a property we call memorisation. Devices with this property are referred to as latches. Section 2 presents a formal concept of memorisation, discussing it quite thoroughly. The unconventional definition was first proposed in Vingron [1979] in a slightly different form. This section, with its introduction of setting, resetting, and memorising subsets of input events, and the specification of a memory function in a reduced VK-map, forms the prime part of this chapter. In Section 3 we start the logical description of latches (or memory) by introducing inclusions and exclusions. Section 4 introduces and develops the basic memory-evaluation formulas for latches. A generalisation of these formulas is presented in Section 5. The application of both the basic memory-evaluation formulas and the generalised memory-evaluation formulas is an extensive topic, and is therefore covered in its own right in the next chapter. Accordingly, the present chapter contains no applications.

9.1 What is a Latch?

Informally stated, a latch is the simplest binary switching device or circuit with memorising capability. Latches come in all kinds of technologies and in many functional variants. The spool valve of Figure 9.1a has a memory function: The valve retains the spool's position caused by a previous pressure pulse on one of the pilot lines x1 or x2. A different memory function is realised by the relay circuit of Figure 9.1b: Briefly closing contact x1 activates the coil y, its state of activation being retained by the coil-activated contact, y*. The thyristor circuit of Figure 9.1c provides an example of yet a third memory realisation and function: To make the thyristor fire, we not only need a supply voltage x1 but also a pulse on the gate


Fig. 9.1 Examples of latches

Fig. 9.2 An events graph depicting the input-output behaviour of the sampling circuit, or D-latch

voltage x2; the thyristor, then remaining open, memorises the prior presence of a gate pulse. The memorising capability of latches is usually, but incorrectly, attributed to feedback, as exists, for instance, in the relay circuit of Figure 9.1b. But, firstly, not every circuit in which feedback is incorporated has memorising capability, and, secondly, memorisation is not tied to feedback, as the devices of Figures 9.1a,c show. It is clearly necessary to explain memorisation independently of any technological realisation. Towards this end, consider the subsequent events graph of what is called a sampling circuit or sampling latch. As a latch, this circuit is called a D-latch (i.e., data latch), and is primus inter pares among the elementary latches. A sampling latch has two inputs and one output. The prime input, the data input xD, is sampled, i.e., its value is passed on to the output, y, during usually brief time intervals which are defined by the second input xC, the clock or control input, being 1; between samplings, i.e., while xC is 0, the output y retains (memorises) the last sampled value. This verbal specification is satisfied by the input-output behaviour depicted in the events graph of Figure 9.2. Before continuing, it seems advisable to reread what was said in Section 1.2 on the time-dependent representation of binary signals in a timing diagram, and especially Section 1.3 where the concept of the events graph was introduced. Naming the latter representation a graph (and not a diagram) stresses the time independence of the signals: Neither is their duration relevant, nor is there any skew or inertia. The events graph describes only the input-output behaviour of a device or circuit, but in no way does it imply how to realise that behaviour. Next, let us gauge the information contained in the events graph of Figure 9.2.


In drawing an events graph one tries to model the verbal specification, thinking of the events graph as a well-behaved timing diagram. The location λ = 3 shows how xD = 1 is sampled for the first time. The output remains 1 until xD = 0 is sampled, which is the case in the location λ = 11. As before, the output retains the zero value until xD = 1 is sampled (which is the case at λ = 15), when the output again becomes 1. I encourage you to continue reading the graph till you have verified it. Now, consider the effect each input event x ∈ {0, 1, 2, 3} has on the output y. Whenever x = 3, the output y = 1. Input events with this property are said to set the output. Whenever x = 2, the output y = 0. Input events with this property are said to reset the output. The input events x = 0 and x = 1, on the other hand, sometimes cause the output to be 1, and sometimes to be 0. Nevertheless, we can state unequivocally which output value, 0 or 1, to expect — namely, the preceding output value. An input event x, in any location λ, which causes an output value numerically identical to the previous one (to the output value in location λ − 1) is called a memorising input-event. A circuit or switching device that has at least one setting, one resetting, and one memorising input event — and no others — is called a latch. Please do now take the time to verify the setting, resetting, and memorising properties of the input events in Figure 9.2.
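The set–reset–memorise behaviour just described is directly executable. A minimal Python sketch (my own illustration, with the initial output assumed to be 0) replays any sequence of input events x ∈ {0, 1, 2, 3} of the sampling latch:

    def sampling_latch(events, y=0):
        """Replay an events graph of the D-latch: x = 3 sets the output,
        x = 2 resets it, x in {0, 1} memorises the previous value."""
        outputs = []
        for x in events:
            if x == 3:
                y = 1
            elif x == 2:
                y = 0
            # x in {0, 1}: y keeps its previous value
            outputs.append(y)
        return outputs

    print(sampling_latch([0, 1, 3, 1, 0, 2, 0, 3]))
    # -> [0, 0, 1, 1, 1, 0, 0, 1]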

9.2 The Memory Function

Latch refers to a device or circuit realised in a certain technology. The formal description of a latch, independent of any technology, is called a memory or memory function. In this section we concern ourselves with this formal aspect. A memorising input event x of a latch allows the output yx to be (either) 0 or 1, which we can state as yx ∈ {0, 1}. To coherently express all output values yx (whether caused by setting, resetting, or memorising input events) as elements of a set, we use the following

Definition. A memory is a surjection

    f : E → {{0}, {1}, {0, 1}}.

Its output, yx, is explained as an element of the value of the function f(x), i.e., yx ∈ f(x) with x ∈ E. The output is well-behaved, meaning that

    f(x(λ)) = {0, 1}  ⇒  yx(λ) = yx(λ−1),    (9.1)

where λ, called a location, is an element of the infinite index set L = {1, 2, ...}.

The memory definition (9.1) was inspired by Eilenberg and Elgot's [1970] definition of a relation, and was first put forth in Vingron [1979] where a time parameter was used instead of an index set. We now discuss this definition in detail.


A surjection is a function such that each element of the co-domain is a value of the function, i.e., is an image. The domain E = {0, 1, 2, ..., 2^n − 1} is, in our context, the set of the decimal equivalents x = Σ_{i=1..n} xi · 2^(n−i) of the input events (x1, x2, ..., xn). For a combinational circuit the value of the function is the circuit's output. Not so for a memory function f, where the images f(x) are sets of output values, so that the output yx itself is an element of these sets, yx ∈ f(x). The output values caused by setting or resetting input events are per definition well-behaved. The output value yx(λ) initiated by a memorising input event x(λ) — one that is mapped to {0, 1} — might switch spontaneously, or even oscillate between 0 and 1, if it were not well-behaved¹. It is valid to split an input event x(λ), occupying a location λ of the index set L, into an ordered pair of successive and equal input events (x(λ−1), x(λ)). In all such cases the circuit, being well-behaved, requires the output to be invariant, effectively inhibiting spontaneous switching.

The principle of being well-behaved, f(x(λ)) = {0, 1} ⇒ yx(λ) = yx(λ−1), does not specify whether the input event x(λ − 1) which initiates the prior output yx(λ−1) equals x(λ) or not. If they differ, and if x(λ) maps to {0, 1}, then the present output value yx(λ) retains the previous value yx(λ−1). Thus the principle of being well-behaved not only safeguards the output of a memory function against spontaneous changes of the output value, it also ensures memorisation whenever an input event maps to {0, 1}.

Specifying a Memory Function in a VK-map

As a memory function f maps the input events x ∈ E to one of the sets {0}, {1}, or {0, 1}, it would seem that a VK-map whose every cell contains one of these sets would be an appropriate representation — see Figure 9.3a. But such a representation does not take into account that the memory function must be well-behaved, meaning that f(x(λ)) = {0, 1} ⇒ yx(λ) = yx(λ−1). The correct representation, therefore, is to use a VK-map the cells of which contain the output yx ∈ f(x), and not the function's value f(x) itself — see Figure 9.3b. The thus obtained VK-map is a reduced VK-map (see Section 5.5) and as such can be drawn in its standard form shown in Figure 9.3c.

¹ A memory function is so very different to a conventional function because its output is not the image of the function. The image of a memory function is one of the sets {0}, {1}, {0, 1}, while the output of the memory function is an element of one of these sets. This only causes a problem when an input event x is present that is mapped to {0, 1}. Of course, y being an element of {0, 1}, it must be either 0 or 1. Yet, without further information it is impossible to say whether y = 0 or y = 1. In fact, the value of y could switch spontaneously between the values 0 and 1 without violating the condition y ∈ {0, 1}.


Fig. 9.3 Expressing the input-output behaviour of the sampling circuit of Figure 9.2 in a VK-map

Fig. 9.4 Reading the reduced VK-map of the sampling circuit according to the chosen sequence of input events stated in the events graph

A memory function f partitions its domain E into a setting, resetting, and memorising equivalent, E1, E0, and Ey, respectively, whereby the memorising equivalent Ey is explicitly defined via well-behavedness:

    E1 := {x(λ) | f(x(λ)) = {1}},
    E0 := {x(λ) | f(x(λ)) = {0}},    (9.2)
    Ey := {x(λ) | (f(x(λ)) = {0, 1}) → (yx(λ) = yx(λ−1))}.

These are easy to visualise in the reduced VK-map of a memory function. Those cells containing 1s comprise the set of setting input events. Those containing 0s comprise the set of resetting input events. And those containing the symbol for the previous output, yx(λ−1), comprise the set of memorising input events.
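Definitions (9.2) also suggest a mechanical test: over a sufficiently rich events graph, an input event is setting if it always yields output 1, resetting if it always yields 0, and memorising if it always repeats the previous output. A Python sketch of this classification (my own illustration; xs and ys are the input and output rows of an events graph):

    def classify(xs, ys):
        """Partition the input events of an events graph into setting,
        resetting, and memorising equivalents according to (9.2).
        Assumes the graph exercises each event richly enough."""
        obs = {}
        for lam in range(1, len(xs)):                  # locations 2, 3, ...
            obs.setdefault(xs[lam], []).append((ys[lam], ys[lam - 1]))
        E1 = {x for x, o in obs.items() if all(y == 1 for y, _ in o)}
        E0 = {x for x, o in obs.items() if all(y == 0 for y, _ in o)}
        Ey = {x for x, o in obs.items()
              if all(y == yp for y, yp in o)} - E1 - E0
        return E1, E0, Ey

    xs = [0, 1, 3, 1, 0, 2, 0, 3, 2]
    ys = [0, 0, 1, 1, 1, 0, 0, 1, 0]
    print(classify(xs, ys))    # ({3}, {2}, {0, 1}) -- the D-latch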

Reading the Reduced VK-map of a Memory Function

The technical term reading (the reduced VK-map of a memory function) refers to the process of developing an events graph for the memory function from its reduced VK-map. Take, for example, the reduced VK-map of the sampling circuit repeated in Figure 9.4. To develop the events graph, one chooses a sequence of input events. We choose the same sequence of input events as in Figure 9.2, which for your convenience is also repeated in Figure 9.4. Now, as we proceed with the reading process, I encourage you to draw the output values into the events graph of Figure 9.4 as they are obtained.


In the initial location (λ = 1) we take the input event x to be zero (x = 0). According to the VK-map we have started with a memorising event; not knowing the preceding output value (yx(λ−1)), we take the liberty of assuming it to be 0, so that the present value must also be 0 — and this value is entered for y in location 1 of the events graph. In the second location (λ = 2) the input event is one (x = 1) which, according to the VK-map, is again a memorising one. But this time we know the output of the previous input event (the previous output) to be 0, so that the present output must also be 0. The input event x = 3 at λ = 3 is a setting input event (see the VK-map), so y = 1 is entered into the events graph in location 3. Presumably these few explanations are sufficient to understand the reading procedure. Just this once, please do take the time to complete the events graph for this example. If you have any doubts, note that you must come up with the events graph of Figure 9.2.

Before leaving this section, consider the required infinity of the index set L, the set of locations λ. The definition of memory implies that whenever an input event occurs in a succession of input events of an events graph — however long the sequence may be — each input event maintains its membership of one and the same subset E1, E0, Ey, and never will an input event occur which does not belong to one of the setting, resetting, or memorising subsets. Thus, when adding a further input event to an events graph of a memory function, the required infinity of the index set L ensures that the input event added remains setting if its prior occurrences were setting, resetting if they were resetting, and memorising if they were memorising.

9.3 Introducing Inclusions and Exclusions

The evaluation formulas for memory functions are somewhat more complicated than those for combinational circuits, (5.19). Introducing special logic functions, or propositional forms, allows us to formulate the evaluation formulas for memory functions more succinctly. These special logic functions are called inclusions, U(x), and exclusions, V(x), and are defined as the abbreviations:

    U1(x(λ)) :⇔ x(λ) ∈ E1,    V1(x(λ)) :⇔ x(λ) ∉ E1,
    U0(x(λ)) :⇔ x(λ) ∈ E0,    V0(x(λ)) :⇔ x(λ) ∉ E0,    (9.3)
    Uy(x(λ)) :⇔ x(λ) ∈ Ey,    Vy(x(λ)) :⇔ x(λ) ∉ Ey.

As set forth in Section 5.4 (at least for x ∈ E1), the right sides of the above expressions can be calculated as shown in the right sides of the following formulas. In these (and subsequently), the arguments x(λ) of U(x(λ)) and V(x(λ)) are omitted for space reasons and for better readability:

    U1 ⇔ ⋁_{ϕ: Kϕ ∈ K(E1)} Cϕ(x(λ)),    V1 ⇔ ⋀_{ϕ: Kϕ ∈ K(E1)} Dϕ(x(λ)),
    U0 ⇔ ⋁_{ϕ: Kϕ ∈ K(E0)} Cϕ(x(λ)),    V0 ⇔ ⋀_{ϕ: Kϕ ∈ K(E0)} Dϕ(x(λ)),    (9.4)
    Uy ⇔ ⋁_{ϕ: Kϕ ∈ K(Ey)} Cϕ(x(λ)),    Vy ⇔ ⋀_{ϕ: Kϕ ∈ K(Ey)} Dϕ(x(λ)).

Remember that the cover K(E1) is a set of arbitrarily chosen K-sets whose union is E1, e.g., K(E1) = {Kϕ(1), ..., Kϕ(L)} with E1 = Kϕ(1) ∪ ··· ∪ Kϕ(L), and that ϕ : Kϕ ∈ K(E1) is read: those ϕ for which Kϕ ∈ K(E1). Of course, you will usually be using the largest K-sets, i.e., prime sets, not just any K-sets, to obtain simple logic expressions. When working with inclusions, U, and exclusions, V, you will frequently want to fall back on the following summary of their interrelations:

    U1 ∨ U0 ∨ Uy ⇔ 1,    V1 V0 Vy ⇔ 0,
    U1 ∨ U0 ⇔ Vy,        V1 V0 ⇔ Uy,
    U1 ∨ Uy ⇔ V0,        V1 Vy ⇔ U0,
    U0 ∨ Uy ⇔ V1,        V0 Vy ⇔ U1,    (9.5)
    U1 U0 ⇔ 0,           V1 ∨ V0 ⇔ 1,
    U1 Uy ⇔ 0,           V1 ∨ Vy ⇔ 1,
    U0 Uy ⇔ 0,           V0 ∨ Vy ⇔ 1.

Proving these formulas is simple. First consider U1 ∨ U0 ∨ Uy ⇔ 1. As E1 ∪ E0 ∪ Ey = E, we can write (x ∈ E1) ∨ (x ∈ E0) ∨ (x ∈ Ey) ⇔ (x ∈ E), which proves U1 ∨ U0 ∨ Uy ⇔ 1 because x ∈ E ⇔ 1. To prove U1 ∨ U0 ⇔ Vy we reason as follows:

    E1 ∪ E0 = E \ Ey,
    x ∈ (E1 ∪ E0) ⇔ x ∈ (E \ Ey),
    (x ∈ E1) ∨ (x ∈ E0) ⇔ (x ∈ E) ∧ (x ∉ Ey),
    U1 ∨ U0 ⇔ 1 ∧ Vy,
    U1 ∨ U0 ⇔ Vy.

Finally, U1 U0 ⇔ 0 follows directly from E1 and E0 being mutually disjoint.
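Relations (9.5) can also be confirmed mechanically, since each inclusion and exclusion is just a membership test. The following Python check (my own, using the D-latch partition derived above) asserts a sample of the interrelations for every input event:

    E1, E0, Ey = {3}, {2}, {0, 1}          # the D-latch partition of (9.2)
    E = E1 | E0 | Ey
    for x in E:
        U1, U0, Uy = x in E1, x in E0, x in Ey
        V1, V0, Vy = not U1, not U0, not Uy
        assert U1 or U0 or Uy              # U1 ∨ U0 ∨ Uy ⇔ 1
        assert (U1 or U0) == Vy            # U1 ∨ U0 ⇔ Vy
        assert (V1 and V0) == Uy           # V1 V0 ⇔ Uy
        assert (V0 and Vy) == U1           # V0 Vy ⇔ U1
        assert not (U1 and U0)             # U1 U0 ⇔ 0
    print("the sampled interrelations of (9.5) hold")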


9.4 Basic Memory Evaluation-Formulas

An evaluation formula states when the output of a switching function is 1, and when it is 0. Expressing this for a memory function is somewhat more intricate than for a combinational function (see (5.19)). To simplify the arguments, let us only ask when a memory's output is 1. Referring to the partitions E1, E0, and Ey of (9.2), we note: The output y(λ) is obviously 1 in the presence of a setting input event x ∈ E1. But y(λ) can also be 1 when a memorising input event x ∈ Ey is present, assuming that the prior output value, y(λ − 1), was 1. Expressing these two cases formally we get:

Case 1: A setting input event is present — x ∈ E1. The setting equivalent E1 := {x(λ) | f(x(λ)) = {1}} can equivalently be expressed as

    x(λ) ∈ E1 ⇔ f(x(λ)) = {1}.

As the output yx(λ) is an element of f(x(λ)), the latter in our case being the one-element set {1}, the output yx(λ) itself is 1, allowing us to substitute f(x(λ)) = {1} by yx(λ) = 1 or, better still, by the logic variable Yx(λ). We can now rewrite x(λ) ∈ E1 ⇔ f(x(λ)) = {1} as

    U1(x(λ)) ⇔ Yx(λ).    (9.6)

Case 2: A memorising input event is present — x ∈ Ey. The definition of the memorising equivalent, Ey of (9.2), can be expressed as

    x(λ) ∈ Ey ⇔ (f(x(λ)) = {0, 1}) → (yx(λ) = yx(λ−1)).    (*)

Successively, we transform each of the propositional forms x(λ) ∈ Ey, f(x(λ)) = {0, 1}, and yx(λ) = yx(λ−1) into logical expressions. The first is trivial:

    x(λ) ∈ Ey ⇔ Uy.

The output yx(λ) being an element of f(x(λ)), which itself equals {0, 1}, allows us to deduce as follows:

    f(x(λ)) = {0, 1} ⇔ yx(λ) ∈ {0, 1} ⇔ (yx(λ) = 0) ∨ (yx(λ) = 1) ⇔ Ȳx(λ) ∨ Yx(λ) ⇔ 1.

We transform the third propositional form, yx(λ) = yx(λ−1), by using the logic variables Yx(λ) ⇔ (yx(λ) = 1) and Yx(λ−1) ⇔ (yx(λ−1) = 1). Using a truth table, it is easy to prove

    yx(λ) = yx(λ−1) ⇔ (Yx(λ) ↔ Yx(λ−1)).


The above logical form, (*), of the memorising equivalent can now be rewritten as

    Uy ⇔ 1 → (Yx(λ) ↔ Yx(λ−1)),
    Uy ⇔ Yx(λ) ↔ Yx(λ−1),
    Uy ⇔ Yx(λ) Yx(λ−1) ∨ Ȳx(λ) Ȳx(λ−1).

ANDing each side of the above formula with Yx(λ−1) we obtain

    Uy Yx(λ−1) ⇔ Yx(λ) Yx(λ−1) Yx(λ−1) ∨ Ȳx(λ) Ȳx(λ−1) Yx(λ−1)
                 (using Yx(λ−1) Yx(λ−1) ⇔ Yx(λ−1) and Ȳx(λ−1) Yx(λ−1) ⇔ 0),
    Uy Yx(λ−1) ⇔ Yx(λ) Yx(λ−1) ∨ 0,
    Uy Yx(λ−1) ⇔ Yx(λ) Yx(λ−1).    (9.7)

This is as far as Case 2 takes us, for unlike conventional mathematics, in mathematical logic we are not allowed to cancel Yx(λ−1) in (9.7).

Combining cases 1 and 2 to obtain the evaluation formula. We obtain the value of the output variable Yx(λ) by combining (9.6) and (9.7) via OR, i.e., ORing their left sides and right sides separately:

    Yx(λ) ⇔ U1,    (9.6)
    Yx(λ) Yx(λ−1) ⇔ Uy Yx(λ−1),    (9.7)

    Yx(λ) ∨ Yx(λ) Yx(λ−1) ⇔ U1 ∨ Uy Yx(λ−1),
    Yx(λ) (1 ∨ Yx(λ−1)) ⇔ U1 ∨ Uy Yx(λ−1),
    Yx(λ) ⇔ U1 ∨ Uy Yx(λ−1).

This is the first of the memory evaluation formulas. From it we can develop the others with the help of the inclusion and exclusion formulas summarised in (9.5). We start by developing the exclusion form:

    Yx(λ) ⇔ U1 ∨ Uy Yx(λ−1)
          ⇔ V0 Vy ∨ V1 V0 Yx(λ−1)
          ⇔ V0 (Vy ∨ V1 Yx(λ−1))
          ⇔ V0 ((Vy ∨ V1)(Vy ∨ Yx(λ−1)))      [using Vy ∨ V1 ⇔ 1]
    Yx(λ) ⇔ V0 (Vy ∨ Yx(λ−1)).


Lastly, we invert the obtained evaluation formulas:

    Ȳx(λ) ⇔ ¬(U1 ∨ Uy Yx(λ−1)) ⇔ Ū1 ∧ ¬(Uy Yx(λ−1)) ⇔ Ū1 (Ūy ∨ Ȳx(λ−1)), hence Ȳx(λ) ⇔ V1 (Vy ∨ Ȳx(λ−1));
    Ȳx(λ) ⇔ ¬(V0 (Vy ∨ Yx(λ−1))) ⇔ V̄0 ∨ ¬(Vy ∨ Yx(λ−1)) ⇔ V̄0 ∨ V̄y Ȳx(λ−1), hence Ȳx(λ) ⇔ U0 ∨ Uy Ȳx(λ−1).

These so-called basic memory evaluation formulas are now gathered:

    Yx(λ) ⇔ U1 ∨ Uy Yx(λ−1) ⇔ V0 (Vy ∨ Yx(λ−1)),
    Ȳx(λ) ⇔ U0 ∨ Uy Ȳx(λ−1) ⇔ V1 (Vy ∨ Ȳx(λ−1)).    (9.8)
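Formula (9.8) is directly executable. As a minimal sketch (mine, not the book's; the D-latch partitions E1 = {3}, E0 = {2}, Ey = {0, 1} are taken from Section 9.2), the first form of (9.8) can be stepped through an arbitrary input sequence in Python:

    E1, E0, Ey = {3}, {2}, {0, 1}          # D-latch partitions per (9.2)

    def evaluate(events, y_prev=0):
        """Step the basic memory evaluation-formula (9.8):
        Yx(λ) ⇔ U1 ∨ Uy Yx(λ−1)."""
        trace = []
        for x in events:
            U1, Uy = x in E1, x in Ey
            y_prev = int(U1 or (Uy and y_prev))
            trace.append(y_prev)
        return trace

    print(evaluate([0, 1, 3, 1, 0, 2, 0, 3]))   # -> [0, 0, 1, 1, 1, 0, 0, 1]

The trace reproduces the set–reset–memorise behaviour described in Section 9.1.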

9.5 Generalised Memory Evaluation-Formulas

In this section we put forth a generalisation of the memory evaluation-formulas, again leaving their application for the next chapter where their worth is demonstrated. The generalised memory evaluation formulas stating when the output is 1 are:

    Yx(λ) ⇔ U1 ∨ (U1S ∨ Uy) Yx(λ−1) ⇔ V0 (V0S Vy ∨ Yx(λ−1)),
    Ȳx(λ) ⇔ U0 ∨ (U0S ∨ Uy) Ȳx(λ−1) ⇔ V1 (V1S Vy ∨ Ȳx(λ−1)).    (9.9)

These formulas use arbitrary subsets, E1S and E0S, of the setting and resetting equivalents:

    ∅ ⊆ E1S ⊆ E1,    ∅ ⊆ E0S ⊆ E0,    (9.10)

their respective inclusions and exclusions being written as

    U1S :⇔ x ∈ E1S,    U0S :⇔ x ∈ E0S,    (9.11a)
    V1S :⇔ x ∉ E1S,    V0S :⇔ x ∉ E0S.    (9.11b)

Most certainly you will have no difficulty in proving the following relationships:


    U1 ∨ U1S ⇔ U1,    V1 V1S ⇔ V1,    U1 U1S ⇔ U1S,    V1 ∨ V1S ⇔ V1S,
    U0 ∨ U0S ⇔ U0,    V0 V0S ⇔ V0,    U0 U0S ⇔ U0S,    V0 ∨ V0S ⇔ V0S.    (9.12)

Using these formulas we derive Yx(λ) ⇔ U1 ∨ (U1S ∨ Uy) Yx(λ−1), (9.9), from Yx(λ) ⇔ U1 ∨ Uy Yx(λ−1), (9.8), in the following way:

    Yx(λ) ⇔ U1 ∨ Uy Yx(λ−1)
          ⇔ U1 ∨ U1S ∨ Uy Yx(λ−1)                  [U1 ∨ U1S ⇔ U1]
          ⇔ U1 ∨ U1S · 1 ∨ Uy Yx(λ−1)
          ⇔ U1 ∨ U1S (1 ∨ Yx(λ−1)) ∨ Uy Yx(λ−1)
          ⇔ U1 ∨ U1S ∨ U1S Yx(λ−1) ∨ Uy Yx(λ−1)
    Yx(λ) ⇔ U1 ∨ (U1S ∨ Uy) Yx(λ−1).
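That E1S may indeed be chosen arbitrarily is easy to confirm by brute force. The following Python check (my own; for the D-latch, E1 = {3}, so E1S ranges over ∅ and {3}) compares the basic and the generalised inclusive formulas for every subset, every input event, and both prior output values:

    from itertools import chain, combinations

    E1, E0, Ey = {3}, {2}, {0, 1}
    E = E1 | E0 | Ey
    all_subsets = chain.from_iterable(
        combinations(E1, r) for r in range(len(E1) + 1))
    for E1S in map(set, all_subsets):
        for x in E:
            for y_prev in (0, 1):
                U1, U1S, Uy = x in E1, x in E1S, x in Ey
                basic   = U1 or (Uy and y_prev)            # (9.8)
                general = U1 or ((U1S or Uy) and y_prev)   # (9.9)
                assert bool(basic) == bool(general)
    print("(9.9) agrees with (9.8) for every E1S ⊆ E1")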

The next chapter is entirely devoted to the practical application of the memory evaluation formulas, (9.9), in developing feedback latches. The methods developed there, and the feedback latches themselves, are the backbone to designing asynchronous sequential circuits as Part III shows.

Chapter 10

Designing Feedback Latches

The memory evaluation-formulas of the previous chapter are the backbone to designing feedback latches (such as the relay circuit of Figure 9.1b). Yet feedback can only be realised if we can show that the prior output Yx(λ−1) may always (i.e., for all input events) be chosen to equal the present output Yx(λ) — for then and only then can we connect the two leads, thus realising feedback. The necessary analysis is carried out in Section 10.1, leading to modified evaluation formulas adapted to designing feedback latches. Section 10.2 pinpoints a sensitive topic in the input-output behaviour of feedback latches — their susceptibility to so-called memorisation hazards. Contrary to combinational circuits, an otherwise briefly erroneous output value (a hazard) can be perpetuated, causing the latch to malfunction. A crude and not very effective way to reduce the likelihood of a memorisation hazard is to introduce a delay into the feedback, as discussed in Section 10.3. On the other hand, there are two methods to correctly prohibit the occurrence of a memorisation hazard, thus obtaining so-called fail-safe latches. The first method uses what one could call a pre-established feedback loop and is discussed in Section 10.4. The second method, only touched on in Section 10.5, is based on the Eccles-Jordan latch. The theory behind this method is based on latch composition, which is put forth in the next chapter, Chapter 11.

10.1 Feedback Evaluation-Formulas

We use a basic memory evaluation-formula to determine the dependence of the output Yx(λ) in a given location λ on the output Yx(λ−1) of the previous location λ − 1. The formula of our choice is Yx(λ) ⇔ U1 ∨ Uy Yx(λ−1), (9.8). As a memory function partitions its domain E into the mutually disjoint and non-empty subsets E1, E0, and Ey, (9.2), we must consider the above memory evaluation-formula for the following three cases:


Case 1: Take x(λ) to be a setting input-event, so that U1 ⇔ (x(λ) ∈ E1) ⇔ 1, U0 ⇔ (x(λ) ∈ E0) ⇔ 0, and Uy ⇔ (x(λ) ∈ Ey) ⇔ 0. Then the memory's output becomes

    Yx(λ) ⇔ U1 ∨ (Uy ∧ Yx(λ−1)) ⇔ 1 ∨ (0 ∧ Yx(λ−1)) ⇔ 1 ∨ 0 ⇔ 1.

Case 2: Take x(λ) to be a resetting input-event, so that U1 ⇔ 0, U0 ⇔ 1, and Uy ⇔ 0. Then the memory's output becomes

    Yx(λ) ⇔ U1 ∨ (Uy ∧ Yx(λ−1)) ⇔ 0 ∨ (0 ∧ Yx(λ−1)) ⇔ 0 ∨ 0 ⇔ 0.

Case 3: Finally, take x(λ) to be a memorising input-event, so that U1 ⇔ 0, U0 ⇔ 0, and Uy ⇔ 1. Then the memory's output becomes

    Yx(λ) ⇔ U1 ∨ (Uy ∧ Yx(λ−1)) ⇔ 0 ∨ (1 ∧ Yx(λ−1)) ⇔ 0 ∨ Yx(λ−1) ⇔ Yx(λ−1).

In the first two cases, when x(λ) is either a setting or a resetting input-event, the output Yx(λ) in no way depends on Yx(λ−1), while Yx(λ) and Yx(λ−1) are equivalent, Yx(λ) ⇔ Yx(λ−1), when x(λ) is a memorising input-event. We may therefore require the equivalence between Yx(λ) and Yx(λ−1) for all λ of the infinite index set L of locations without contradicting cases 1 and 2:

    Yx(λ) ⇔ Yx(λ−1).    (10.1)

We call this the postulate of feedback. Using this postulate, we substitute Yx(λ) for Yx(λ−1) in Yx(λ) ⇔ U1 ∨ Uy Yx(λ−1), obtaining (in unabridged notation)

    Yx(λ) ⇔ U1(x(λ)) ∨ Uy(x(λ)) Yx(λ).    (10.2)

For better readability we revert to the abridged notation, used so far, where the argument x(λ) is dropped. The memory evaluation formulas then take on the following form for feedback latches:

    Y ⇔ U1 ∨ Uy Y ⇔ V0 (Vy ∨ Y),
    Ȳ ⇔ U0 ∨ Uy Ȳ ⇔ V1 (Vy ∨ Ȳ).    (10.3)

We call these formulas the basic feedback evaluation-formulas.

A similar analysis as above, of the generalised memory evaluation-formulas, (9.9), enables us to develop generalised feedback evaluation-formulas. The necessary analysis requires us to subdivide case 1:

Case 1a: Take x(λ) to be a setting input-event which also belongs to an arbitrarily chosen subset ∅ ⊆ E1S ⊆ E1, so that U1 ⇔ (x(λ) ∈ E1) ⇔ 1, U1S ⇔ (x(λ) ∈ E1S) ⇔ 1, U0 ⇔ 0, and Uy ⇔ 0. Then the memory's output becomes

    Yx(λ) ⇔ U1 ∨ (U1S ∨ Uy) Yx(λ−1) ⇔ 1 ∨ (1 ∨ 0) Yx(λ−1) ⇔ 1 ∨ Yx(λ−1) ⇔ 1.

Case 1b: Take x(λ) to be a setting input-event, but assume it not to belong to the arbitrarily chosen subset ∅ ⊆ E1S ⊆ E1, so that U1 ⇔ 1, U1S ⇔ (x(λ) ∈ E1S) ⇔ 0, U0 ⇔ 0, and Uy ⇔ 0. Then the memory's output becomes

    Yx(λ) ⇔ U1 ∨ (U1S ∨ Uy) Yx(λ−1) ⇔ 1 ∨ (0 ∨ 0) Yx(λ−1) ⇔ 1 ∨ 0 ⇔ 1.

Now, following the above reasoning, we get the generalised feedback evaluation-formulas:

    Y ⇔ U1 ∨ (U1S ∨ Uy) Y ⇔ V0 (V0S Vy ∨ Y),
    Ȳ ⇔ U0 ∨ (U0S ∨ Uy) Ȳ ⇔ V1 (V1S Vy ∨ Ȳ).    (10.4)

Feedback circuits are designed according to either the inclusive (evaluation) formula Y ⇔ U1 ∨ (U1S ∨ Uy)Y, or the exclusive (evaluation) formula Y ⇔ V0(V0S Vy ∨ Y), (10.4). In the former instance we need to choose an arbitrary (possibly empty) subset E1S of the setting input-events x ∈ E1, while in the latter case we must choose a subset E0S ⊆ E0 of resetting input-events. The structure of the feedback circuits obtained is shown in Figure 10.1 — a structure that holds for all feedback latches with two or more inputs. The following simple example of Figure 10.2 demonstrates the usage of the evaluation formula Y ⇔ U1 ∨ (U1S ∨ Uy)Y when we choose the subset E1S to be empty (in which case U1S ⇔ 0), i.e., we use the formula in its so-called basic form Y ⇔ U1 ∨ Uy Y. The design procedure is shown in quite some detail in the figure. Here, as in all further design applications of latches, we start with the latch specified in a reduced VK-map. The logic circuit derived is that of the TTL D-latch 74LS75. It is equally simple to design the logic circuit in its exclusive form, as Figure 10.3 shows. The resetting exclusion, V0, is obtained from the reduced VK-map as C̄ ∨ D, while the memorising exclusion, Vq, is C. The basic exclusive formula, Q ⇔ V0 (Vq ∨ Q), then becomes Q ⇔ (C̄ ∨ D)(C ∨ Q). This formula leads to the circuit of Figure 10.3c.
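The two forms can be compared numerically. A small Python sketch (my own illustration) iterates the disjunctive equation Q ⇔ CD ∨ C̄Q and the conjunctive equation Q ⇔ (C̄ ∨ D)(C ∨ Q) over the same input sequence and confirms that they describe the same D-latch behaviour:

    def disjunctive(C, D, Q):
        return (C and D) or (not C and Q)       # Q ⇔ CD ∨ C̄Q

    def conjunctive(C, D, Q):
        return (not C or D) and (C or Q)        # Q ⇔ (C̄ ∨ D)(C ∨ Q)

    Q1 = Q2 = 0
    for C, D in [(1, 1), (0, 1), (0, 0), (1, 0), (0, 1), (1, 1), (0, 0)]:
        Q1, Q2 = int(disjunctive(C, D, Q1)), int(conjunctive(C, D, Q2))
        assert Q1 == Q2
        print((C, D), Q1)

Of course, agreement of the steady-state values says nothing yet about hazards; that is the topic of the next section.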

Fig. 10.1 The general design of feedback latches according to (10.4)


Fig. 10.2 Designing the D-latch in its basic disjunctive form

Fig. 10.3 Designing the D-latch in its basic conjunctive form

10.2 Memorisation Hazards

Hazards in combinational circuits have been studied quite extensively, e.g., Vingron [2004]. A hazard (as a technical term, as used here) is the occurrence of a short-lived and unwanted interruption of an otherwise steady-state output signal. A 1-hazard refers to a 1-signal being briefly interrupted (i.e., briefly dropping to 0), and it can occur when one setting input event follows another. A 0-hazard implies a brief 1-spike in a signal that is otherwise 0, and can occur when one resetting input event follows another. The output of a combinational circuit is, in the last consequence, i.e., despite a possibly brief interruption, always enforced by the input event: meaning, a setting input event will ultimately force the output to become 1, while a resetting input event will ultimately force the output to become 0.

In a latch, on the other hand, a memorising input event does not have the ability to enforce a certain output value — a memorising input event can only try to memorise the output value caused by the previous input event (be it a setting, resetting, or some other memorising input event). If memorisation fails, the latch is said to malfunction, meaning that it is not the previous output value that is memorised but, rather, its inverse. This is quite unacceptable because, in this case, the false


Fig. 10.4 Visualising the proper functioning of a basic disjunctive latch (top row), and the malfunctioning of the latch (bottom row), when trying to memorise a 1-signal.

output signal is not short-lived, it is permanent. The occurrence of an incorrectly memorised prior output signal (or even the danger thereof) is called a memorisation hazard. In the remainder of this section we look at the most hazard-prone latches, those designed according to the basic evaluation formulas Y ⇔ U1 ∨ Uy Y and Y ⇔ V0 (Vy ∨ Y) of (10.3), to detail how memorisation hazards occur.

Memorisation Hazards in Disjunctive Latches

The logic circuit in Figure 10.4 is a graphical representation of Y ⇔ U1 ∨ Uq Y, (10.3). Figure (a) shows the signal flow in the presence of a setting input-event (in which case U1 ⇔ 1, U0 ⇔ 0, and Uq ⇔ 0). Note that the feedback signal is interrupted at the AND-gate. For the latch to work properly — see Figure (b) — when switching to a memorising input-event (U1 ⇔ 0, U0 ⇔ 0, and Uq ⇔ 1) we must assume that the feedback signal remains 1 although the U1-input to the OR (which causes the feedback signal to be 1) has dropped to 0. Both inputs to the AND now being 1, the AND fires, thereby sustaining the 1-signal in the feedback loop, see Figure (c).


Fig. 10.5 A basic disjunctive latch never malfunctions when memorising a 0-output

A more realistic scenario — see Figure (d) — is to assume that the 1-signal in the feedback will start breaking down once U1 becomes 0, thus making it uncertain whether the energy supplied by a weakening 1-output of the AND-gate suffices to sustain or rebuild a 1-signal in the feedback loop. In this case it is quite likely that the feedback loop will break down, so that the output erroneously becomes 0. In the following sections we concentrate on how to avoid this malfunctioning. Having seen the strong likelihood of the basic disjunctive latch malfunctioning when a 1-output is to be memorised, it is notable that memorising a 0-output never fails, i.e., there is no danger of the latch malfunctioning in this case. This is visualised and explained in Figure 10.5, and I strongly recommend pondering the figure.

Memorisation Hazards in Conjunctive Latches

The latch of Figure 10.6, which is designed according to the basic conjunctive formula Y ⇔ V0 (Vy ∨ Y), (10.3), is equally important. It never malfunctions when a 1-output is to be memorised. This is pictured and explained in Figure 10.6, a figure I strongly encourage you to ponder. On the other hand, the basic conjunctive latch shows its weakness when trying to memorise a 0-output. The top row of Figure 10.7 shows and explains the signal flow that must prevail for memorisation to be successful. The bottom row explains and pictures the signal flow that leads to the circuit malfunctioning. Again, I leave it to you to ponder this figure. Designing feedback latches so that memorisation hazards are avoided as far as possible is the prime design objective, the reason being that if an error occurs it is not transient — it is permanent. Achieving this design goal is the topic of the next two sections.


Fig. 10.6 A basic conjunctive latch never malfunctions when memorising a 1-output

Fig. 10.7 Visualising the proper functioning of a basic conjunctive latch (top row), and the malfunctioning of the latch (bottom row), when trying to memorise a 0-signal.

10.3 Delayed Feedback

The switching device that gave rise to switching circuitry was the electro-magnetic relay, developed by the Scottish-American scientist Joseph Henry in 1835. Intrinsic to these devices is the inertia and delay with which the relay's contacts react to a


Fig. 10.8 Input-output behaviour of the ideal inertial-delay

Fig. 10.9 Studying the effect of an inertial delay in the feedback of the basic disjunctive form of the D-latch

change in the coil's state of energisation. This inertia and delay was — and mostly still is — taken to be an essential part of every feedback circuit, and is modelled by installing an ideal inertial delay in the feedback. The behaviour of the ideal inertial delay is pictured in Figure 10.8. The rising and falling edges of an input signal are delayed equally long by the constant delay-time t. Inertial refers to pulses and troughs shorter than the delay time t having no effect on the output. In this respect you can view a delay element as an energy buffer compensating brief and/or runt salients. To see the effect of an inertial delay we first use the basic disjunctive form of the D-latch designed in Figure 10.2 and shown in Figure 10.9 with an inertial delay in the feedback. As basic disjunctive latches notoriously have difficulties memorising a 1-output, we examine this case by forcing a 1-output through setting the latch. Figure (a) shows the signal flow for the setting input-event C ⇔ 1 and D ⇔ 1, which specifies cell 3 in the VK-map of Figure 10.2a. Note that the 1-signal in the feedback loop is interrupted between the loop's AND-gate and the OR-gate — just as is the case for the setting condition of the general circuit of Figure 10.4a. Switching to the


Fig. 10.10 Studying the effect of an inertial delay in the feedback of the basic conjunctive form of the D-latch

memorising input-event C ⇔ 0 and D ⇔ 1 forces the output of the AND (D ∧ C) to 0, thereby depriving the output-OR of its setting input signal. To compensate for this, the inertial delay must send its 1-signal, Q*, through the now open AND, (C̄ ∧ Q*), of the feedback loop, through the output-OR, and back to the delay — all hopefully fast enough to make the feedback loop self-sustaining. In real-world circuits this can, to a varying degree, lead to a time-dependent depletion of the output signal, as schematically pictured in Figure 10.9c, showing that a delay in the feedback loop does not guarantee the proper functioning of a feedback latch. As we already know from the prior section, basic conjunctive latches are prone to malfunction when trying to memorise a 0-output. For the basic D-latch this is laid out in Figure 10.10 which, before continuing, you might want to ponder.

The more energy stored in the delay of the feedback, and the longer the delay is able to dissipate this energy into the feedback loop, the smaller the time-dependent deviation of the value of the latch's output signal from its correctly memorised value. On the other hand, this has an adverse influence on the frequency with which input events may change while still causing correct output values (these are so-called slow latches). As you can see, employing a delay in the feedback to ensure the proper functioning of a feedback latch is not very effective. It is crude in that it has no theoretical basis and is a purely intuitive attempt to solve the problem of a memorisation hazard. The best possible approach, one that in fact works without any delay in the feedback, is shown in the next section.
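The ideal inertial delay itself is a simple algorithm: an input edge is copied to the output only after the new value has persisted for the full delay time. A discrete-time Python model (my own sketch; one list entry per time step, the delay given in steps):

    def inertial_delay(signal, delay, y_init=0):
        """Ideal inertial delay: the output takes on a new input value only
        after the input has held it for `delay` consecutive steps; shorter
        pulses and troughs are swallowed."""
        out, y, run = [], y_init, 0
        for v in signal:
            run = run + 1 if v != y else 0   # how long the input has differed
            if run >= delay:
                y, run = v, 0                # the edge survived: copy it
            out.append(y)
        return out

    #        runt 1-pulse    sustained 1-pulse
    x = [0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0]
    print(inertial_delay(x, delay=2))
    # -> [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0]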


10.4 Fail-Safe Latch Design

Inserting delays into the feedback of latches is counterproductive: (a) the delays are not able to prohibit the occurrence of memorisation hazards, and (b) the designed latches are slow in the sense that the frequency with which they can be operated is restricted. The only way to correctly design a latch is to develop it so that the whole feedback loop is pre-established when the latch is either set or reset. This means: if the latch is set, the 1-output signal of the latch must be present in the whole feedback loop without any interruption; and, likewise, if the latch is reset, the 0-output signal of the latch must be present in the whole feedback loop without any interruption. Then, when switching to a memorising input event, no change in the signal flow within the feedback loop occurs, completely avoiding any memorisation hazard: The latch is fail-safe. These fail-safe latches are developed using the following equations, which are a simple expansion of (10.4) and can be realised by inserting a single OR- or AND-gate, as shown in Figure 10.11, into the general latch design of Figure 10.1. Let me stress the point that the thus developed latches are fail-safe by pointing to their pre-established feedback loops.

    Y ⇔ U1 ∨ (U1 ∨ U1S ∨ Uy) Y ⇔ V0 ((V0 ∧ V0S) Vy ∨ Y),
    Ȳ ⇔ U0 ∨ (U0 ∨ U0S ∨ Uy) Ȳ ⇔ V1 ((V1 ∧ V1S) Vy ∨ Ȳ).    (10.5)

Fig. 10.11 The pre-established feedback ensures that a 1-output of the disjunctive D-latch is stored faultlessly


Fig. 10.12 The pre-established 1-feedback ensures that a 1-output of the disjunctive D-latch is stored faultlessly

Consider disjunctive latches. No disjunctive latch malfunctions when memorising a 0-output, as discussed for the basic disjunctive latch in Figure 10.5. In addition to this, the inserted OR-gate in Figure 10.11a ensures that any setting input event (making U1 ⇔ 1) pre-establishes the feedback loop, i.e., makes the 1-signal of the output present in the whole feedback loop without any interruption. A dual statement holds for the conjunctive latch. No conjunctive latch malfunctions when memorising a 1-output, as discussed for the basic conjunctive latch in Figure 10.6. In addition to this, the inserted AND-gate in Figure 10.11b ensures that any resetting input event (making V0 ⇔ 0) pre-establishes the feedback loop, i.e., makes the 0-signal of the output present in the whole feedback loop without any interruption.

It is quite informative to revisit the D-latches developed and analysed in Figure 10.9 and Figure 10.10, where we saw that inserting delays into the feedback is not very effective in prohibiting memorisation hazards. Again we start with the disjunctive D-latch, following the steps of its development and its analysis in Figure 10.12. The first formula in figure (a) is the already well-known basic form, and it is this form that is expanded to the fail-safe formula represented in figures (b) and (c). Note that the signal flow in the feedback loop is guaranteed to be uninterrupted when switching from the setting to a memorising input event, due to the unfailing 1-output of the inserted OR-gate. Thus the 1-output remains unwaveringly 1. The so-called minimised formula (derived by using CD ∨ C̄ ⇔ D ∨ C̄) of course also leads to a fail-safe latch design. The conjunctive latches of


Fig. 10.13 The pre-established 0-feedback ensures that a 0-output of the conjunctive D-latch is stored faultlessly

Figure 10.13 depicts the minimised form of the latch equations, visualising that this latch is indeed fail-safe. The approach taken here to designing fail-safe latches has one possible drawback: it provides us with a single-output latch. If we want a fail-safe latch with two mutually inverted outputs Q and Q̄, we need to take a different approach, an approach merely touched on in the next section but expounded in the next chapter.
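For the disjunctive D-latch, the fail-safe equation of (10.5) with E1S = ∅ reads Q ⇔ CD ∨ (CD ∨ C̄)Q, which the minimisation CD ∨ C̄ ⇔ D ∨ C̄ reduces to Q ⇔ CD ∨ (D ∨ C̄)Q. A short Python check (my own) confirms that the basic, fail-safe, and minimised forms agree for all inputs and feedback values:

    for C in (0, 1):
        for D in (0, 1):
            for Q in (0, 1):
                basic     = C and D or (not C and Q)
                fail_safe = C and D or ((C and D or not C) and Q)
                minimised = C and D or ((D or not C) and Q)
                assert bool(basic) == bool(fail_safe) == bool(minimised)
    print("all three D-latch forms agree combinationally")

The three forms are combinationally equivalent; the fail-safe property lies not in the truth table but in the pre-established signal path created by the inserted gate.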

10.5 Fail-Safe Latches with Mutually Inverted Outputs

The principle of the Eccles-Jordan latch, see Figure 10.14a, was developed in 1919 (note the early date) — nowadays realised as cross-coupled NAND- or NOR-gates, it was then realised with radio valves (triodes). In fact it was not conceived as a latch (in those days there was no obvious use for a digitally memorising device); it was conceived as an oscillator. The Eccles-Jordan latch is probably the most unlikely candidate on which to build a fail-safe latch with two mutually inverted outputs. Of course, the latch does have two outputs, but they are not inverted, and the latch can operate in an unstable


Fig. 10.14 The NAND version of the Eccles-Jordan latch, (a), and its application in designing a fail-safe D-latch with two inverted outputs, (b).

state where its outputs vie for domination. So it is all the more astonishing that these drawbacks can be overcome when using an appropriate design technique. This technique is based on the composition of functions and is developed in the next chapter. The fail-safe D-latch with two mutually inverted outputs, see Figure 10.14b, was developed by this technique (as shown in Section 11.4).

Chapter 11

Designing Latches by Composition

Latch composition allows us to realise a given (or specified) latch f by employing a chosen latch h as a so-called hinge function, and developing a generic function g as a combinational circuit which satisfies the catenation hg = f for the given and chosen memory functions f and h. Both the latch you wish to design, f, and the hinge latch, h, may have two or more inputs. Although the tally of their inputs may differ, you will usually find yourself working with a hinge latch of two inputs. You can use any implementation of a hinge latch, whether a feedback latch designed along the lines of the previous chapter, or a hardware latch such as a thyristor, a spool valve, or even the Eccles-Jordan latch. The Eccles-Jordan latch and its usage as a hinge function is looked into quite extensively due to its omnipresence in the literature and in circuit design.

11.1 Theory of Latch Composition

To visualise the problem of latch composition we shall consider in detail the example laid out in Figure 11.1, referring, in this section, to its sub-figures simply as (a), (b), and (c). The symbol f denotes the latch (or memory function) to be composed; in our example we want to compose the D-latch shown in the VK-map of (a). h is the hinge function on the basis of which f is to be composed; as shown in the VK-map of (b), the hinge latch h used in our example is a set-dominant latch.

A memory function, as you will remember, maps a set of input events E into the set Q = {{0}, {1}, {0, 1}}, see (9.1), and it is the graphical representation of this mapping that points the way to solving the composition problem. The VK-map of the latch f, the latch we want to design, is shown in sub-figure (a) together with the mapping for f. The correspondence between the mapping and the VK-map is hopefully obvious, but if not, might be clarified by the sub-section 'Specifying a Memory Function in a VK-map' in Section 9.2. As shown there (refer to Figure 9.3a), the mapping of the function f could also have been expressed


Fig. 11.1 An example latch-composition used to visualise the generalisations of this section

by a VK-map. But such a representation is not easily applied to the catenation of functions. In a similar manner as for the function f, discussed above, sub-figure (b) depicts the VK-map and the mapping for the hinge function h; do take a moment or so to check the correspondence between the VK-map and the mapping for the hinge function.

As both given memory functions f : X → Q and h : Y → Q have the same co-domain Q, we can — as indicated in sub-figure (c) — always find a function g so that the catenation hg equals f. Allow me to detail this: As f maps both input events (0, 0) and (0, 1) of X to {0, 1} of Q — see sub-figure (a) — these input events must, in the catenation diagram of sub-figure (c), map to the input event (0, 0) ∈ Y, for this is the only input event which h maps to {0, 1}. A like consideration necessitates us to map (1, 0) ∈ X to (0, 1) ∈ Y, as the former input event, (1, 0), is mapped to {0} by f, while (0, 1) is mapped to {0} by h. An ambiguous situation arises when mapping (1, 1) ∈ X to one of the input events (1, 0) or (1, 1) of Y. As either mapping is correct, we can arbitrarily choose one of them (but of course: only one).


The above example will hopefully help you in visualising the generalisations introduced below. To obtain a generic function g : X → Y, we first partition its domain X and co-domain Y (as indicated in the sub-figures (a) and (b) of our example):

    X0 := {x | f(x) = {0}},      Y0 := {y | h(y) = {0}},      (11.1a)
    X1 := {x | f(x) = {1}},      Y1 := {y | h(y) = {1}},      (11.1b)
    Xq := {x | f(x) = {0, 1}},   Yq := {y | h(y) = {0, 1}}.   (11.1c)

These partitions are used as domains of the partial functions

    f0 : X0 → {{0}},      h0 : Y0 → {{0}},      (11.2a)
    f1 : X1 → {{1}},      h1 : Y1 → {{1}},      (11.2b)
    fq : Xq → {{0, 1}},   hq : Yq → {{0, 1}}.   (11.2c)

The domains of these partial functions, being partitions of X and Y, respectively, are mutually disjoint,

    X0 ∩ X1 = ∅,    Y0 ∩ Y1 = ∅,
    X0 ∩ Xq = ∅,    Y0 ∩ Yq = ∅,
    X1 ∩ Xq = ∅,    Y1 ∩ Yq = ∅,

while their respective unions comprise the original domains X and Y,

    X0 ∪ X1 ∪ Xq = X,    Y0 ∪ Y1 ∪ Yq = Y.

As visualised in sub-figure (c) of our example, the partitions X0, X1, Xq and Y0, Y1, Yq allow us to define the partial generic functions

    g0 : X0 → Y0,    g1 : X1 → Y1,    gq : Xq → Yq.    (11.3)

From the above definitions, (11.1), (11.2), and (11.3), it follows that

    f0 ∩ f1 = ∅,    f0 ∩ fq = ∅,    f1 ∩ fq = ∅,    f0 ∪ f1 ∪ fq = f,
    h0 ∩ h1 = ∅,    h0 ∩ hq = ∅,    h1 ∩ hq = ∅,    h0 ∪ h1 ∪ hq = h,
    g0 ∩ g1 = ∅,    g0 ∩ gq = ∅,    g1 ∩ gq = ∅,    g0 ∪ g1 ∪ gq = g.

But, most importantly, the following logical implications allow us to choose the generic functions g0, g1, and gq:


.

g0 : X0 → Y0



h0 g0 = f0 ,

g1 : X1 → Y1



h1 g1 = f1 ,

gq : Xq → Yq



hq gq = fq .

(11.4)

These implications are interpreted as: Any arbitrarily chosen function g0 : X0 → Y0, or g1 : X1 → Y1, or gq : Xq → Yq satisfies the catenation h0 g0 = f0, or h1 g1 = f1, or hq gq = fq, respectively.

Figure 11.1 is somewhat misleading in that it gives the impression that both the hinge latch h and the latch f we wish to compose on the basis of h have (only) two inputs. Actually they can have any number of inputs (greater than or equal to 2), and their tallies can differ. We take f to have n ≥ 2 and h to have m ≥ 2 inputs. The above interpretation of (11.4) allows us to formulate the following procedures, which summarise the process of composing a given latch f : X → {{0}, {1}, {0, 1}} on the basis of a chosen hinge latch h : Y → {{0}, {1}, {0, 1}} by developing a generic function g : X → Y such that hg = f.

Write the elements of the partial hinge sets Y0, Y1, and Yq — which are the resetting, setting, and memorising subsets of the domain Y of the chosen hinge function h : Y → {{0}, {1}, {0, 1}} — as m-tuples (y1, ..., ym), and not as their decimal equivalents:    (11.5)

(a) Y0 = {(y1, ..., ym) | h(y1, ..., ym) = {0}} consists of the input events (y1, ..., ym) of those cells of the VK-map of h containing the output symbol 0.

(b) Y1 = {(y1, ..., ym) | h(y1, ..., ym) = {1}} consists of the input events (y1, ..., ym) of those cells of the VK-map of h containing the output symbol 1.

(c) Yq = {(y1, ..., ym) | h(y1, ..., ym) = {0, 1}} consists of the input events (y1, ..., ym) of those cells of the VK-map of h containing the output symbol q.

Developing the VK-map of a generic function g : X → Y. Draw an empty VK-map identical to the one for f : X → {{0}, {1}, {0, 1}}, the cells of which are entered as follows:    (11.6)

(a) If a cell x of the VK-map for f contains the output symbol 0, then the same cell of the VK-map for g contains an arbitrary m-tuple (y1, ..., ym) of Y0.

(b) If a cell x of the VK-map for f contains the output symbol 1, then the same cell of the VK-map for g contains an arbitrary m-tuple (y1, ..., ym) of Y1.

(c) If a cell x of the VK-map for f contains the output symbol q, then the same cell of the VK-map for g contains an arbitrary m-tuple (y1, ..., ym) of Yq.
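Procedures (11.5) and (11.6) translate almost directly into code. The following Python sketch (my own illustration) composes the D-latch f of Figure 11.1 from the set-dominant hinge latch h of that example, whose hinge sets, read off its VK-map as described in the text, are Y0 = {(0, 1)}, Y1 = {(1, 0), (1, 1)}, Yq = {(0, 0)}; wherever (11.6) leaves a free choice, the first listed tuple is taken:

    # Reduced VK-maps: cell -> 0 (resetting), 1 (setting), 'q' (memorising).
    f_map = {0: 'q', 1: 'q', 2: 0, 3: 1}     # the D-latch, x = 2*xC + xD

    Y0, Y1, Yq = [(0, 1)], [(1, 0), (1, 1)], [(0, 0)]

    def compose(f_map, Y0, Y1, Yq):
        """Procedure (11.6): enter into every cell of g an m-tuple taken
        from the hinge set matching that cell's output symbol."""
        pick = {0: Y0[0], 1: Y1[0], 'q': Yq[0]}
        return {cell: pick[z] for cell, z in f_map.items()}

    print(compose(f_map, Y0, Y1, Yq))
    # -> {0: (0, 0), 1: (0, 0), 2: (0, 1), 3: (1, 0)}

Feeding each input event of f through this g, and the resulting tuple through h, reproduces f's set–reset–memorise behaviour: the catenation hg = f guaranteed by (11.4).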


Fig. 11.2 Demonstrating the latch composition procedures (11.5) and (11.6)


Although it is obvious, it is worth stating explicitly that a latch designed by composition is fail-safe only if its hinge latch is fail-safe.

11.2 Example Illustrating Latch Composition

In this section we use an example of a latch composition to demonstrate the sequence of steps you need to take and the choices you must make. These steps and choices are put forth and visualised in Figure 11.2, to which we refer in this section.

First, you need to decide on the hinge latch h you wish to base your latch design on, and specify this hinge latch in a VK-map, see sub-figure (a). On the one hand you use this VK-map to write the resetting, setting, and memorising subsets Y0, Y1, and Yq, following the procedure detailed in (11.5). On the other hand you use the VK-map to develop the hinge latch's logic function, preferably in its fail-safe form.

In the second step, illustrated in sub-figure (b), you draw the VK-map specifying the latch f you wish to compose (on the basis of h). Next to the VK-map for f you draw an identical but empty VK-map which will be used to develop the generic function g. In general, it is good practice to copy this arrangement of f and the empty VK-map for g a number of times, so that you can use the VK-maps for g to develop different versions of the generic function, as shown in sub-figure (c).

Now, as depicted in sub-figure (c), and following procedure (11.6), we start developing the first version, g1, of a generic function. Cell 2 of f, containing a 0, marks a resetting input event, so that any element of Y0 can be entered into cell 2 of the VK-map for g1. In our example Y0 has only one element, (0, 1), so this element must be used. A slightly different situation exists for cell 3, recognisable as marking a setting input event due to its output value being 1. Into cell 3 of g1 we need to enter an element of the setting input events Y1. Any element will do, and as g1 shows, we choose (1, 0). We could have chosen (1, 1), which then leads to g2. Cells 0 and 1 of f mark memorising input events, as their entries, z, are identical with the output variable, z. In each of the equally numbered cells in the VK-map of g you can enter any element of Yq. But Yq contains only one element, namely (0, 0), so this must be entered into each of the cells 0 and 1 of any function g.

Having filled in all cells of a generic function, we calculate the logic formulas for the function. For our generic functions g1 and g2 these formulas are written next to the respective VK-maps. Finally, you design the circuits for the generic function you wish to use and for the hinge latch, and catenate them as shown in the figure. Remember, the obtained latch f is fail-safe only if its hinge latch h is fail-safe.


11.3 Eccles-Jordan Latch

The Eccles-Jordan latch is one of the most commonly used and misused latches. It comes in two primary designs, shown in Figure 11.3, to which we refer in the following using only the letters (a) to (f) of its sub-figures.

Starting with the cross-coupled NOR-latch, (a), it is most obvious that when both inputs Y1 and Y2 are 1, the outputs Q and Q* are both 0. Next, follow the signal flow in (a) when switching from the input event Y1Y2 to, say, Y1¬Y2, noting that Q switches to 1. Moving on to ¬Y1¬Y2, note that the output value (1, 0) is memorised. But most importantly, when switching from Y1¬Y2 — or from ¬Y1Y2 — to the memorising input event ¬Y1¬Y2, the feedback values remain unchanged, meaning the feedbacks were pre-established, making the Eccles-Jordan latch fail-safe for these transitions.

For the analysis of the cross-coupled NAND-latch we take a more formal approach, representing the obtained latch formulas in (d) and (e) and combining these two sub-figures in (f):

Q ⇔ ¬(Y1 ∧ Q*) ⇔ ¬(Y1 ∧ ¬(Y2 ∧ Q)) ⇔ ¬Y1 ∨ Y2 Q
Q* ⇔ ¬(Y2 ∧ Q) ⇔ ¬(Y2 ∧ ¬(Y1 ∧ Q*)) ⇔ ¬Y2 ∨ Y1 Q*        (11.7)

Fig. 11.3 The Eccles-Jordan latch is commonly realised either by two cross-coupled NOR or two cross-coupled NAND gates as depicted. Alternatively, you can use cross-coupled IMPLICATIONS or cross-coupled INHIBITIONS.
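The fail-safe transitions and the race just described can be checked with a small gate-level simulation; this is a sketch of mine, not a method from the book. The helper settle iterates the two cross-coupled NAND gates until the feedback signals stop changing.

```python
# Gate-level sketch of the cross-coupled NAND latch of Fig. 11.3:
# Q = NAND(Y1, Q*), Q* = NAND(Y2, Q).
def nand(a, b):
    return 1 - (a & b)

def settle(y1, y2, q, q_star):
    """Update both gates simultaneously until the feedbacks stop changing."""
    for _ in range(10):                      # plenty for a two-gate loop
        nxt = (nand(y1, q_star), nand(y2, q))
        if nxt == (q, q_star):
            return nxt                       # stable
        q, q_star = nxt
    raise RuntimeError("race: the latch oscillates")

q, qs = 1, 0                                 # latch assumed previously set
for y1, y2 in [(1, 1), (1, 0), (1, 1), (0, 1), (1, 1)]:
    q, qs = settle(y1, y2, q, qs)            # memorise, reset, memorise, set, ...
    print((y1, y2), "->", (q, qs))
# Note: settle(1, 1, 1, 1), i.e. memorising straight after the common
# input event (0, 0), never settles -- the race of property (4) below.
```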


Summarising, the Eccles-Jordan latch is characterised by the following properties. (1) It has one input event, called the common input event, for which both output values are identical, which means the outputs are not negated.¹ (2) If it is possible to avoid presenting the latch with the common input event, then the outputs are mutually inverted. (3) Switching to the memorising input event from an input event other than the common input event is fail-safe. (4) Switching from the common input event to the memorising input event renders the output values undetermined, their final values depending on which feedback signal is faster (one refers to this as a race condition).

It follows from the above summary that making use of the Eccles-Jordan latch pivots on the possibility of prohibiting the latch from being presented with the common input event. When used thus, it is an ideal hinge latch.

11.4 Using Eccles-Jordan as a Hinge Latch

The procedure to follow when using an Eccles-Jordan latch as a hinge latch h to design a given latch f is put forth in the example of Figure 11.4, to which we refer here. As before, we start by specifying the hinge latch we wish to use, which in this example is the Eccles-Jordan NAND-latch. But latch composition uses hinge latches with only a single output, making it necessary to state explicitly which of the two outputs of the Eccles-Jordan latch we wish to use as our 'reference output'. We choose (quite arbitrarily) the output Q, which we document by drawing the Eccles-Jordan latch as in (a) together with its VK-map in (b). From this VK-map we read the resetting, setting, and memorising subsets Y0, Y1, and Yq, immediately dropping (0, 0) as we know from Figure 11.3f that it is the common input event. This ensures that (0, 0) is never presented as input to the hinge latch, making certain that it operates in a fail-safe manner. Note that each subset Y0, Y1, and Yq now contains only a single input event, so that we get one and only one solution for the generic function g, as becomes obvious in the next step.

In the next step, depicted in sub-figure (c), we employ procedure (11.6), as shown in slightly greater detail in Figure 11.2. As you can see, each cell of g has a uniquely defined output value to enter into it, so that there is only a single generic function g which satisfies the catenation hg = f. From the VK-map for g we calculate the function's outputs as Y1 ⇔ ¬(C ∧ D) and Y2 ⇔ ¬(C ∧ ¬D). The latter formula is usually used in a modified form to minimise the number of NAND gates needed:

Y2 ⇔ ¬(C ¬D) ⇔ ¬(0 ∨ C ¬D) ⇔ ¬(¬C C ∨ C ¬D) ⇔ ¬(C (¬D ∨ ¬C)) ⇔ ¬(C ¬(D C)) ⇔ ¬(C Y1).

¹ To call the outputs negated or mutually inverse requires them to have this property for all input events.


Fig. 11.4 Composing a fail-safe D-latch using Eccles-Jordan as hinge latch

The final step is to draw the circuits as shown in the figure. You may want to check that for no input event (0,0), (0,1), (1,1), or (1,0) of the input variables C and D is the Eccles-Jordan hinge latch presented with the input event (0,0), ensuring fail-safe operation, no occurrence of undefined outputs due to races, and truly inverted outputs.
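As a plausibility check of the composed design, the following sketch of mine catenates the generic function (Y1 as NAND of C and D, Y2 as NAND of C and Y1) with the NAND hinge latch; the fixed number of settling sweeps is an assumption that suffices for this two-gate loop.

```python
# Plausibility check of the composed D-latch of Fig. 11.4: the generic
# function g feeds the NAND Eccles-Jordan hinge latch.
def nand(a, b):
    return 1 - (a & b)

def d_latch_step(c, d, q, q_star):
    y1 = nand(c, d)              # Y1 <=> not(C and D)
    y2 = nand(c, y1)             # Y2 <=> not(C and not D), built as NAND(C, Y1)
    for _ in range(4):           # a few sweeps settle the two-gate loop
        q, q_star = nand(y1, q_star), nand(y2, q)
    return q, q_star             # y1, y2 are never both 0: no common event

q, qs = 0, 1
for c, d in [(1, 1), (0, 1), (0, 0), (1, 0), (0, 0)]:
    q, qs = d_latch_step(c, d, q, qs)
    print((c, d), "->", (q, qs))  # q follows d while c = 1, else memorises
```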

Reviewing Part II

The inability of conventional switching algebra to develop a theory of latches is embarrassing considering that latches have been around in the form of the Eccles-Jordan circuit since 1919, not to mention their widespread use in the early computers of Zuse since the mid-1930s. From this point of view it is not surprising that the theory presented in Part II is not present in the common literature on switching theory (as is documented in a major article on electronic latches in Wikipedia), pointing to the complacency of scientists working in an established and almost stagnant field. The prevailing notion that a latch has (only) two inputs, one to set the latch, the other to reset it, realised by simple relay feedback circuits, seems to have satisfied all desire for understanding the underlying principles. The idea that memorisation was in no way restricted to feedback never took hold as a theoretical concept despite its ubiquitous presence in electronic, magnetic, fluidic, mechanical, and other realisations. Even the notion that a latch could have multiple inputs was never seriously taken into account.

It is commonly believed that feedback latches, of necessity, have a delay in their feedback, as you might even want to infer (albeit incorrectly) from equation (9.8). But to specialise this formula for feedback latches, we require the output of the latch and its feedback signal to be identical according to (10.1), thus excluding the possibility of the feedback being delayed. The absence of time dependency in the (ideal) description of latches is a prerequisite for being able to use switching algebra (i.e., formal logic, which — nota bene — is non-temporal) in calculating them. The absence of delays in the feedback signals has enormous consequences, as it carries over to sequential circuits in general (even falsifying Huffman's design method).

Feedback latches have an inherent fault — they are prone to memorisation failures. But importantly, these potential failures in memorisation (these memorisation hazards) can be avoided. Pointing to this problem and its solution (i.e., making feedback latches fail-safe) is of great practical importance for industry.


The design of latches by composition is important in its own right. But apart from that, it points to the importance of composition as a mathematical tool in switching algebra; we will see it used again — this time even iteratively, i.e., repetitively — in the development of asynchronous circuits.

Part III

Asynchronous Circuits

Introduction to Part III

Sequential circuits are the backbone of all industrial and computer circuits. But only starting with Huffman [1954] did it become possible to formally specify and calculate them. This is all the more astonishing as industry was already using sequential circuits in automation and safety processes, and they were pervasive in the upcoming and quickly evolving computer generations. But still, all sequential circuits of the time were developed intuitively.

To act in concert and function as a unit, the many sequential circuits of a computer were synchronised, which (in effect) means they were only allowed to switch with the occurrence of a pulse emitted by a central clock. These circuits quickly became known as synchronous circuits, but describing their specialised design is not a topic this book covers. The circuits Huffman examined henceforth became known as asynchronous circuits, and these and only these are considered here. They are characterised by the fact that any change of the value of an input signal is immediately acted upon (without waiting for a clock pulse). The speed with which they can operate is not hampered by the clock frequency; they are potentially the fastest circuits we can design. And as they only switch when an input event changes, they potentially have the least energy consumption, minimising heat problems. Today's computer industry is therefore showing a distinct interest in asynchronous circuits, which in the past were certainly hampered by an inadequately developed design theory.

The prevailing belief that asynchronous circuits are difficult to design is shown to be outdated by the following chapters. In fact their design is simple and straightforward and in large parts even algorithmic. Nothing stresses the importance of asynchronous circuits more than the often overlooked fact that all logic circuits built with electric relays, or with pneumatic or hydraulic switching devices (valves, fluidics), and even a large number of electronic logic circuits are asynchronous.

Chapter 12

Specifying Asynchronous Circuits

We start the discussion of asynchronous circuits with an overview of four very different ways to specify them, doing so with the help of an example I choose to call the Synchronisation-Pulse Problem (Dietmayer [1971], page 578, Fig. 5.16).

Specifying a sequential circuit has notoriously been difficult. Quite often the task the circuit has to solve is put forward colloquially, whether verbally or in writing, as in Figure 12.1a for our example. The advantage is that the problem is more readily understood by the uninitiated and, to be honest, also by the specialist. Furthermore, the initial formulation of a practical problem will always incorporate a colloquial description that puts the problem in the context of its technical usage. The danger of a colloquial specification, apart from possibly being verbose, is that it can be ambiguous and incomplete, making it hard or impossible to answer the questions "What if ...?". Taking "what ifs" into account is what lies at the heart of solid engineering.

In an attempt to avoid these dangers, while keeping the specification descriptive and thus easily understandable, the problem might sometimes be expressed in an events graph — see Figure 12.1b. Although the events graph does avoid ambiguity, it is, even for moderately complex problems, usually an incomplete specification. Furthermore, many problems don't lend themselves to an events graph description. But most of all, the events graph does not tell you how to design the sequential circuit.

The events tree¹, conceived by Vingron [1983] and shown for our example in Figure 12.1c, is an unambiguous and complete description of a sequential problem. Simple problems can be formulated directly in an events tree, while an events graph is easily and systematically transformed into one. One of the great advantages of the events tree is that it allows us to directly calculate the circuit using the theory of latches developed in chapters 10 and 11. This is where the concept of fail-safe latches bears fruit.

¹ Originally called word-recognition tree.


Fig. 12.1 Four ways to specify a sequential task

The flow table, developed by Huffman [1954], is the oldest and most widely used specification for asynchronous circuits. A flow table for our example is shown in Figure 12.1d. As is the case for the events tree, the flow table can be developed from the events graph, but also quite easily and systematically from the events tree. This is indicated in Figure 12.1c by the encircled numbers of those leaves whose output values are either 0 or 1; these are written into the flow table of Figure 12.1d, augmented by encircled numbers (further stable states) that, in the events tree, mark the path from the respective leaf to the root. Of course, simple problems can be cast directly into a flow table. But before the flow table can be used to calculate a circuit, Huffman's design method requires a number of transformations and interpretations of the table to be carried out.

While the events graph plays a subordinate role in specification, it becomes the centre of interest in a topic called verification, where we want to develop a short but complete events graph by which to test a designed circuit. In general you will notice an interdependence, a back and forth, between the specifications — events graph, events tree, and flow table. One very important example of this is using the events graph in conjunction with the events tree or the flow table to reverse engineer a sequential circuit given as a black box.

12.1 Developing an Events Graph

There is no systematic, let alone algorithmic, way to express a colloquially specified sequential circuit as an events graph. The best you can do is to write a sequence of input events long enough to capture all the intricacies of successive and changing input situations. Such a sequence is quite arbitrary, the main concern being that it is long enough to express all or, at the very least, most input situations you expect to encounter. After the sequence of input events has been defined, you can start thinking about the effect your chosen input sequence is required to have on the output. For simpler sequential tasks you will hardly run into difficulties (compare Figure 12.1a and Figure 12.1b), but tasks of even average complexity can get you thinking. The Passed-Sample Problem expressed colloquially in Figure 12.2 is such a task.

Fig. 12.2 An Events Graph for the Passed-Sample Problem


Showing how to develop the events graph will hopefully get you started on the later synthesis of the circuit — which of course is highly recommended. The chosen sequence of input values xS and xD is shown in the events graph, and for the combination of these inputs we write the associated decimal values of the input event x ∈ {0, 1, 2, 3} using x = xS · 2 + xD, successively numbering the locations λ. According to the verbal specification, whenever xS = 0, the output y is also 0. This accounts for the zero output in the locations 2 to 5, then 7, 8, 9, 12, 13, 15, and so on. To see how to determine the output value y when xS = 1, let us start at location 6. The arrow marked a leads us back to location 1, which is 'the last time xS was 1'. In this location the value of xD is zero (follow arrow b). It is this value of xD we assign to y in location 6 (follow arrow c). Corresponding arrows are shown for location 10. The remaining arrows (for the locations 11, 14, 16, 17, 18, 23, 27, 28, 31, and 33 to 35) are equivalent to arrow c. You might like to answer the question of why the initial output value is undefined for the starting input event chosen.
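The verbal rule is compact enough to state in a few lines of Python; this is my reading of the specification, and the input sequence is made up, not the one of Figure 12.2.

```python
# Passed-Sample rule: whenever xS is 1, the output repeats the value xD
# had the last time xS was 1 before now; whenever xS is 0, the output is 0.
def passed_sample(xs_seq, xd_seq):
    y, last_xd = [], None          # last_xd: xD sampled at the prior xS = 1
    for xs, xd in zip(xs_seq, xd_seq):
        if xs == 0:
            y.append(0)
        else:
            y.append(last_xd)      # None marks the undefined initial output
            last_xd = xd
    return y

xs = [1, 1, 0, 0, 0, 0, 1, 0, 0]   # made-up input sequence
xd = [0, 1, 1, 0, 1, 1, 1, 0, 0]
print(passed_sample(xs, xd))       # -> [None, 0, 0, 0, 0, 0, 1, 0, 0]
```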

12.2 What is an Events Tree?

A natural way to express a sequential problem formally is by way of an events tree; examples hereof are shown in Figure 12.1c and Figure 12.3. These trees conventionally grow downward and consist of nodes (the circles) and directed edges (the connections from one node to another). One distinguishes between an edge that leads to a node, a so-called input edge, and one that leads away from a node, an output edge. Each node has exactly one input edge, with the exception of a special node called the root, which has no input edge. The root, together with the edges emitted from it and the nodes to which these edges lead, comprises a sub-tree which we call the base tree. All other sub-trees are called internal sub-trees, their roots being referred to as internal nodes. Some nodes, referred to as end-nodes or leaves, have no output edges. Each sub-tree has 2^n edges emitted from its root, one for each input event x ∈ {0, 1, ..., 2^n − 1}.

A succession of ℓ connected edges from the root to a node (not necessarily an end-node) represents an input word w(ℓ). An input word w(ℓ), a sequence of input events, is an ℓ-tuple (x^(1), x^(2), ..., x^(ℓ)) of input events, its elements listed from root to leaf, such that the first input event, x^(1), is the present input event, the second input event, x^(2), is the prior one, the third, x^(3), the pre-prior one, and so on. Each node is identified by an input word w(ℓ) and assigned the partial output variable y_{w(ℓ)}, the value of which is induced by and exists simultaneously with the present input event x^(1), but only when the prior input event was x^(2), the pre-prior x^(3), etc. This is visualised in the partial events graphs of Figure 12.3. An events tree is fully specified when all leaves are assigned either 0 or 1 — but to this there is one important exception.


Fig. 12.3 Introducing the Events Tree

Consider the root w(ℓ) = (x^(1), x^(2), ..., x^(ℓ)) of an internal tree whose partial output variable we then call y_{w(ℓ)}. The input edge to this internal sub-tree is x^(ℓ), the last input event of the input word w(ℓ). One of the edges emitted from the root of this internal tree is the edge also marked by the input event x^(ℓ), this edge leading to the node (x^(1), x^(2), ..., x^(ℓ), x^(ℓ)) whose last two input events are identical. Remember that the postulate for a circuit being well-behaved (see section 1.3) states that its output may only change when the value of at least one input variable changes. This implies that a circuit's output remains unchanged as long as the values of all input variables remain constant. As all values of the input variables of the internal root w(ℓ) = (x^(1), x^(2), ..., x^(ℓ)) and of the node (x^(1), x^(2), ..., x^(ℓ), x^(ℓ)) are identical, so must the partial output variables of these two nodes be:


y_{(x^(1), x^(2), ..., x^(ℓ−1), x^(ℓ), x^(ℓ))} = y_{(x^(1), x^(2), ..., x^(ℓ−1), x^(ℓ))}        (12.1)

This is the all-important law of feedback, and it leads to feedback signals in sequential circuits; in fact, (12.1) even states that every sub-tree represents a latch. Nodes whose last two input events are identical are taken to be end-nodes, nodes from which no further internal tree emits.

Finally, using a very simple example, we show how to develop the events tree directly from the colloquially specified problem. The Priority-AND Problem, see Figure 12.3, requires the output y to be 1 when both inputs x1 and x2 are 1, but only if x1 became 1 before x2 did. Having become 1, the output y is to remain 1 as long as both input signals remain 1. First draw the base tree, defining the input events x as x = x2 · 2 + x1. When x2 is 0 (meaning x is either 0 or 1), the output y is 0, so that y0 = 0 and y1 = 0. When x2 = 1 but x1 = 0, the output is also 0, so that y2 = 0. But when x2 = 1 and x1 = 1 (i.e., x = 3), the output y3, invoked by the input event 3, is undefined, its value depending on whether x1 was 1 prior to x2 or not.

In an attempt to determine a value for y3, we clarify what effects the prior input events x = 0, 1, 2 to x = 3 have on the output y. This is done by attaching a new sub-tree to the node marked y3, the end-nodes of this sub-tree being marked y3x. The output values y30 and y32 must both be 0 because in both cases the prior input value of x1 was simply not present, let alone did it have a 1-value prior to x2. Importantly, we apply (12.1) to write y3,3 = y3. The partial output to have the value 1 is y3,1 because it, per definition, stands for the output value caused as long as the present input event is 3 (x1 = 1 and x2 = 1), but only when this was preceded by input event 1 (x1 = 1 and x2 = 0). Actually, the last sentence is all the reasoning needed to define the Priority-AND problem in an events tree: only one end-node is 1, all others must be 0.
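For illustration, the finished Priority-AND events tree can be written down as a nested dictionary; the representation and helper below are mine, not the book's notation.

```python
# The Priority-AND events tree as a nested dict; input events
# x = x2*2 + x1 as defined above.
events_tree = {
    0: 0,          # y0 = 0  (x2 = 0)
    1: 0,          # y1 = 0  (x2 = 0)
    2: 0,          # y2 = 0  (x2 = 1, x1 = 0)
    3: {           # y3 undefined -> internal node, split on the prior event
        0: 0,      # y30 = 0  (x1 was not present before)
        1: 1,      # y31 = 1  (x1 was 1 before x2 became 1)
        2: 0,      # y32 = 0  (x2 came first)
        3: "y3",   # y33 = y3 by the law of feedback (12.1): a latch hides here
    },
}

def partial_output(word):
    """word = (present event, prior event); returns 0, 1, or a reference."""
    node = events_tree[word[0]]
    return node if not isinstance(node, dict) else node[word[1]]

print(partial_output((3, 1)))   # -> 1
print(partial_output((3, 2)))   # -> 0
print(partial_output((3, 3)))   # -> 'y3', resolved by feedback
```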

12.3 What is a Flow Table?

In an events graph of a sequential circuit you will always find at least one input event which causes the output to be 0 in certain locations, and 1 in others. Nevertheless, there is a general principle that states what output value to expect whenever a new input event is entered. This principle is best explained by the behaviour of a simple tape-reading device we call a sequential automaton. The input to a sequential automaton is an input event; the output is one of the numbers 0 or 1. The automaton reads a tape (of finite length) on which is written a program, this program defining unequivocally what new output value to expect whenever a new input event is presented to the automaton. The program written on the tape is called a flow table. We first look at the way the flow table is organised, and only then will we discuss how the sequential automaton reads the program and reacts to it.


Fig. 12.4 Organisation of a flow table

As Figure 12.4 shows, the program, or flow table, consists of labelled code-lines written one below the other. Each code-line is referred to as an (internal) state of the automaton. The symbols used to label these code-lines are chosen arbitrarily, but we usually use positive integers {1, 2, ..., s} or letters of the alphabet {a, b, c, ...}. A calligraphic Q refers to the set of states, e.g., Q = {1, 2, ..., s} or Q = {a, b, c, ...}. A variable q ∈ Q that stands for any element of Q is called a state variable.

Each code-line has the same tally of cells. The leftmost cell contains the line's label q ∈ Q, i.e., the state variable q. The rightmost cell contains the output value y(q) — either a 0 or a 1 — associated with the state q. The cells in between comprise the body of the code-line, and each such cell contains a state variable q+ which itself is some element of the set of states Q, the same set whose elements are used to label the lines of the flow table. There are 2^n cells in the body of the code-line, each associated with one of the 2^n input events x ∈ E, with E = {0, 1, ..., 2^n − 1}. The cells in any given column are associated with the same input event x. This text refers to a cell (q, x) as a condition of the automaton, as opposed to a state of the automaton, which refers to a row q of the flow table.

A sequential automaton is a device designed to read a flow table. To do so, it has three reading heads placed so that, together, they can read the information in one and the same code-line of the flow table. The leftmost reading head reads the line label, q, telling the automaton which line the other two heads are reading. The rightmost reading head reads the output value, y(q), in the line labelled q, and sends this value to the automaton's output. Both these heads, the leftmost and the rightmost, are stationary. The middle reading head can read the content of any cell in the body of code-line q; which cell is being read is determined by the input event x present. This implies that the middle reading head can be moved to and fro over the cells in the body of a code-line as required.

Fig. 12.5 Behaviour suppressed by the requirement of stability. (a) Buzzing, (b) autonomous changes of state.

The sequential automaton is fed a flow table it can read as described in the paragraph above. The automaton has the ability to move the tape, or flow table, upwards and downwards so that any line on the tape can be read. The tape is always moved to the line q+ specified as content of the present cell (q, x) being read; if q+ = q the tape remains stationary (i.e., is not moved). If a cell (q, x) being read contains the value q (i.e., the same label as the row label), then the automaton is said to be in a stable condition, the cell (q, x) itself also being called a stable condition; if the cell (q, x) contains a label q+ different from the row label q, the automaton is said to be in an unstable condition, the cell (q, x) itself also being called an unstable condition.

The sequential automaton is an idealisation determined by two things. Firstly, the tape is moved infinitely quickly from one line to another, meaning there is no time delay associated with this transition. This implies that, during the transition from one line to another, the input event x is invariant. Secondly, a flow table always abides by the principle of stability: when changing from a present condition (q, x) to a different next-condition (q, x+) containing a next state q+ such that q+ ≠ q, the condition (q+, x+) reached in the next state must be stable, i.e., it must have the


value q+ inscribed. This is visualised in Figure 12.4. A consequence of stability is that each column of a flow table must have at least one stable condition. Furthermore, if a column has only one stable condition, all cells in that column have the label of the stable state entered into them, i.e., every cell 'points to' the (single) stable state — the target state.

The principle of stability effectively prohibits oscillating between states (so-called buzzing), visualised in Figure 12.5a, and autonomous changes from one state to another, an example of which is shown in Figure 12.5b. Stability is the only restriction we place on the design of a flow table. But note that stability allows for rows with no stable condition, a single stable condition, or even multiple stable conditions. When transforming a verbally specified sequential problem into a flow table, we will find it quite advantageous to restrict the number of stable conditions per row to one. A flow table that has no more than one stable condition per row is said to be a primitive flow table. In this book we will always use a primitive flow table when transforming a verbally specified sequential problem into a flow table. Do not worry about creating redundant rows — these will automatically be removed during the synthesis procedure.
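How the automaton reads its tape is easily sketched in code; the two-state flow table below is hypothetical, chosen only so that every transition obeys the principle of stability.

```python
# Sketch of the sequential automaton reading a flow table: each row q
# holds the next states q+ for the input events x = 0..3 plus the
# output y(q).
flow_table = {
    #        x = 0    1    2    3     output y(q)
    "a": (["a", "b", "a", "a"], 0),
    "b": (["a", "b", "b", "b"], 1),
}

def run(table, q, inputs):
    for x in inputs:
        q_plus = table[q][0][x]              # cell (q, x): the next state
        # the condition (q+, x) reached must itself be stable
        assert table[q_plus][0][x] == q_plus, "table violates stability"
        q = q_plus
        print(f"x = {x} -> state {q}, output {table[q][1]}")

run(flow_table, "a", [0, 1, 3, 2, 0])
```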

Chapter 13

Employing Events Trees

This chapter is a quick introduction to specifying, calculating, and verifying simple asynchronous circuits. The appropriate tool for this task is the events tree. For one thing, it is illuminating with respect to understanding the problem it describes; then, it lends itself perfectly to an algebraic evaluation; and, finally, it enables us to develop a complete events graph by which to test a designed circuit.

13.1 From Events Graph to Events Tree

In this section we show how to develop the events tree for a problem specified by an events graph. The thing to note is that the procedure is methodical, not intuitive, so you do not need to work through dozens of examples to acquire the needed experience. As an example, consider the events graph of Figure 13.1a which, of course, is our previously introduced Synchronisation-Pulse problem of Figure 12.1.

Fig. 13.1 Developing the events tree from the events graph of the Synchronisation-Pulse problem

Start by drawing a base tree, but without specifying its end-nodes. To find the value (0 or 1) to assign to the partial output yx, search the events graph for all locations of x; if the partial outputs yx in all these locations are the same (either 0 or 1), then assign this value to yx of the base tree; if the values are not all equal, the partial output is undefined, making that node an internal node. Apply this to the base tree of Figure 13.1b. The input event x = 0 occurs in the locations λ = 1, 3, 9, 11, 17; the output values of y in these locations are all 0, so that y0 = 0. x = 1 occurs in the locations λ = 4, 6, 8, 14, 16; the associated output values of y are again all 0, making y1 = 0. Likewise, you will find that y2 = 0, the output y being 0 wherever x = 2. Finally, y3 cannot be assigned a constant value (either 0 or 1) and thus remains variable; x = 3 occurs in the locations λ = 5, 7, 13, 15, but the associated output values of y vary, being 1 in the locations λ = 5, 15 and 0 in the locations λ = 7, 13. y3 is an internal node.

Attach a new sub-tree to an internal node y_{x^(1),x^(2),...,x^(ℓ)}, again without specifying its end-nodes y_{x^(1),x^(2),...,x^(ℓ),x^(ℓ+1)}. Search the events graph for all occurrences of the input word (x^(ℓ+1), x^(ℓ), ..., x^(2), x^(1)), noting the output values of y in the locations of the present input event x^(1). If all these values are equal, then y_{x^(1),x^(2),...,x^(ℓ),x^(ℓ+1)} in the events tree is given this value. If the values differ, y_{x^(1),x^(2),...,x^(ℓ),x^(ℓ+1)} remains undefined and is thus a new internal node. Continue this procedure until no new internal nodes are found.

We now apply the above procedure to develop the tree of Figure 13.1c. After attaching the sub-tree y3 to node x = 3, we first search the events graph for occurrences of the input word (0, 3), i.e., where the present input event x = 3 is preceded by x = 0. This sequence of input events does not occur in the events graph, and we mark this with a dash for y30 in the events tree. Next, we search the events graph for the input word (1, 3). This input word occurs in the present locations λ = 5, 7, 15, where the present output values of y are 1 for λ = 5, 15, but 0 for λ = 7. These output values differing, node y31 of the events tree is an internal node. The next node, y33, equals y3 according to equation (12.1) and is thus an end-node. The last input word we need to search for in the events graph is (2, 3). It occurs only once, its present input event being in location λ = 13, where the value of y is 0, this being the value assigned to y32. Completing the events tree of Figure 13.1d will now hopefully pose no further problem, but doing so is important to make sure you understand the above procedure.
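The search step is mechanical enough to sketch in a few lines; the helper and the events graph below are mine (words are listed oldest-first, as in the text above).

```python
# Collect the outputs observed wherever a given input word occurs in
# the events graph, given as parallel lists x (input events) and y
# (outputs); e.g. word (1, 3) means "x = 3 preceded by x = 1".
def outputs_for_word(x, y, word):
    k = len(word)
    return {y[lam] for lam in range(k - 1, len(x))
            if tuple(x[lam - k + 1 : lam + 1]) == word}

x = [0, 2, 0, 1, 3, 1, 3, 1, 0]        # a made-up graph, not Fig. 13.1
y = [0, 0, 0, 0, 1, 0, 0, 0, 0]
print(outputs_for_word(x, y, (3,)))    # {0, 1} -> y3 is an internal node
print(outputs_for_word(x, y, (1, 3)))  # {0, 1} -> y31 internal as well
print(outputs_for_word(x, y, (2,)))    # {0}    -> y2 = 0
print(outputs_for_word(x, y, (0, 3)))  # set()  -> never occurs: a dash
```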


13.2 Calculating and Drawing the Circuit

Possibly the greatest advantage of the events tree is the ease with which it enables us to design the circuit, i.e., to calculate and draw it. The edges of a sub-tree correspond to the cells of a VK-map, while their nodes are the variables (or values) entered into the cells. Thus, the sub-trees y, y3, y31, and y313 of Figure 13.2 correspond to the VK-maps Y, Y3, Y31, and Y313, respectively. The last of these VK-maps clearly represents a latch and is as such easily calculated. The two-input VK-maps for Y3 and Y31 also represent latches, which becomes obvious when they are expanded to three-input VK-maps as shown. These examples stress how important it was to develop the theory of latches quite generally, e.g., not restricting ourselves to latches with only two input variables. Evaluating the VK-maps provides us with

Y ⇔ X1 X2 Y3
Y3 ⇔ X2 (X1 ∨ Y31)(X1 ∨ Y3)
Y31 ⇔ X2 ∨ X1 Y313 ∨ X1 Y31 ⇔ Y313 ∨ X1 Y31
Y313 ⇔ X2 ∨ X1 Y313

where, in the formula for Y31, the term X2 ∨ X1 Y313 is recognised as Y313.

Fig. 13.2 Calculating a circuit from an events tree


Fig. 13.3 Drawing a circuit for the Synchronisation-Pulse Problem of Figure 13.1.

Drawing the circuit according to the formulas developed is straightforward. First (see Figure 13.3), draw the circuit for each formula separately, starting with the highest-order variables and (importantly) marking each input lead with the appropriate variable name, but drawing no feedback loops as yet. In the second stage, connect all equally named leads, giving you the final circuit which, notably, automatically contains all feedback loops.

13.3 Direct Specification of Feedback

The simplest sequential circuits are latches, but trying to describe their behaviour in an events tree that stops growing, with all its end-nodes assigned either 0 or 1 (as we were able to do in section 13.1), quickly proves to be impossible. We show this using the D-latch, the primus inter pares of latches, at the same time showing when to stop grafting sub-trees, i.e., attaching higher-order sub-trees. The D-latch is specified by the events graph of Figure 13.4a, which was originally discussed in section 9.1 where it was introduced as the sampling latch. When developing its events tree (see Figure 13.4b), start by drawing (only) the base tree.


Fig. 13.4 D-Latch

To find the value (either 0 or 1) to assign to y0, search the graph for the values of the output y in those locations λ where x = 0. As these values differ, y0 is undefined, necessitating the grafting of a higher-order sub-tree in this position. Likewise, you will find y1 to be undefined, leading to the grafting of a sub-tree at this node. Only the values for y3 and y2 prove to be constant, 1 and 0, respectively.

In defining the values for the end-nodes y00, y01, y03, y02, we note that y00 = y0 — according to equation (12.1) — and, having excluded simultaneously changing input signals in the events graph, y03 remains undefined. y01 is the output value observed in those locations λ where the input event x is 0 but the prior input event was 1. Although the value of y01 is not constant over all locations, we do note that it always equals the prior output value, y01 = y1. Finally, y02 = 0. The specifications for the end-nodes y00, y01, y03, y02 of sub-tree y0 are shown in Figure 13.4b. Like deliberations lead to the sub-tree y1 of the figure.

The roots of sub-trees with identical end-nodes have identical values, making these sub-trees identical; this holds for all sub-trees, not only for the base tree. Thus, assuming or choosing y03 to be 1, and y12 to be 0, makes the sub-trees y, y0 and y1 identical, i.e., y = y0 = y1, which leads to the events tree of Figure 13.4c. A sub-tree found to be identical to another (in the same or a lower level) is not expanded any further, as the information gathered would merely be repetitive.

The obtained insight allows us to add an option to the way we define or specify the value of end-nodes of the base tree: yx is specified when it is 0 in all locations λ


of the events graph where the input event x is present; yx is specified when it is 1 in all locations λ of the events graph where the input event x is present; and yx is specified when its value in all locations λ where x is present equals the respective prior output value, the value in the location λ − 1. With this definition we can develop the events tree of Figure 13.4c directly from the events graph, obtaining a representation that expresses feedback directly. You will no doubt notice that the input events x = 0 and x = 1 are expressed by xC being 0, while the input events 3 and 2 are expressed by xC being 1, ultimately leading to the further reduced events tree of Figure 13.4d. A compositional and fail-safe design of the D-latch was shown in Figure 11.2.

To demonstrate the advantage, even necessity, of expressing feedback directly, we develop the events tree for the D-flipflop of Figure 13.5, to which we now refer. The D-flipflop is frequently used for synchronising a data signal xD with a clock signal xC, passing only the synchronised version of the data signal, the output y of the flipflop, on to a synchronous circuit. The input-output behaviour is expressed colloquially in sub-figure a, but it is probably not immediately obvious how to develop the output signal for the given sequence of input events. This hopefully becomes clear by following the arrows in sub-figure b, in which an events graph is developed.

To draw the events tree of sub-figure c, you first need to specify the variables y0, y1, y3, y2 of the base tree y. While both y0 and y1 vary over the range of locations λ in sub-figure b, their values always equal the respective prior output values, allowing us to state that y0 = y and y1 = y, letting us write the variable y for y0 and y1 in the base tree of sub-figure c. While both y3 and y2 also vary over the range of λ (e.g., y3 = 0 in the locations λ ∈ {8, 15, 29, 31}, and 1 in the locations λ ∈ {3, 17, 23, 25}), their momentary values are not always identical with their prior values (e.g., y3(λ) is not always the same as y3(λ − 1)). Thus these nodes must be used as (internal) roots of grafted sub-trees in the hope of determining the values of the variables y3 and y2.

The nodes of the first-level sub-tree y3 are y30, y31, y33, y32, the first two having the values 0 and 1, respectively (e.g., y30 occurs in the locations λ ∈ {8, 15}, locations in which the output is 0). By (12.1), y33 = y3, leaving us only to consider y32. While the values for y32 vary (see locations λ ∈ {25, 29, 31}), they are always identical to the prior output values y2, allowing us to write y32 = y2. Likewise, y23 = y3. Finally, we choose the value 1 for y21, making the end-nodes of y3 and y2 identical, allowing us to state that y3 = y2. Now the events tree of sub-figure c can be reduced to that of sub-figure d.

The reduced events tree of Figure 13.5d is directly transformable into a logic circuit: the base tree is a D-latch (see Figure 13.4d) the data input of which is the output y3 of a D-latch whose clock input, xC, has been inverted. It is prudent to pay special attention to inverted feedback signals, as they can potentially cause oscillations or, when handled algebraically carelessly, even produce contradictions (of the form Y ⇔ ¬Y).


Fig. 13.5 D-Flipflop

The prime example of a circuit with an inverted feedback is the toggle circuit, its technical implementation being called the T-flipflop. As shown in the events graph of Figure 13.6a, it has only one input and one output, the output value changing every time the input is activated. When developing the T-flipflop's events tree, it is advantageous to start with the input x being 1. You will notice that the output varies but is not equal to the respective prior output value, so that we need to graft or append the sub-tree y1, its end-nodes being y10 and y11. While y11 = y1, you will note from the events graph that y10 is always equal to the inverted value of the prior output value y0, i.e., y10 = ¬y0. Now turning to the input x = 0, we note from the events tree that y0 = y, allowing us to redraw the tree of Figure 13.6b as that of sub-figure c. The associated circuit is that of a D-flipflop whose data input xD is the inverted output ¬y of the circuit.


Fig. 13.6 T-Flipflop
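The two reduced events trees translate directly into a few lines of simulation; in this sketch of mine each D-latch is idealised as transparent while its clock input is 1, and the T-flipflop is obtained by feeding the inverted output back as data.

```python
# Sketch of the reduced events trees of Figs. 13.5d and 13.6c: a
# D-flipflop as two D-latches, the first clocked on the inverted clock,
# and a T-flipflop as that D-flipflop with inverted feedback.
def d_latch(c, d, q):
    return d if c == 1 else q               # transparent while c is 1

def d_flipflop(xc, xd, y3, y):
    y3 = d_latch(1 - xc, xd, y3)            # first latch, clock inverted
    y = d_latch(xc, y3, y)                  # second latch passes y3 on
    return y3, y

y3 = y = 0
for xc in [0, 1, 0, 1, 0, 1]:
    y3, y = d_flipflop(xc, 1 - y, y3, y)    # data input xD = not y: toggle
    print(f"clock = {xc}  output = {y}")    # y toggles on each rising edge
```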

13.4 Verification versus Reverse Engineering

The problem first considered in this section is this: having designed an asynchronous circuit according to a given events tree, we next want to verify our logic design by developing a minimal-length events graph, a so-called verification graph, with which to completely test the circuit's input-output behaviour. Testing actual technical realisations is sometimes a necessity because even faultlessly developed sequential circuits can come up with unpleasant surprises.

A verification graph is developed from a verification tree, which itself is developed from the events tree. In principle, the verification tree contains those and only those nodes of the events tree to which constant values, 0 or 1, or don't cares are or can be assigned, but dropping all end-nodes for which (12.1) holds, and all nodes which refer to simultaneous changes of multiple input values. Some problems, such as the synchronisation-pulse problem of Figure 13.7, lead to verification trees of finite height, with all their end-nodes having constant values. The verification tree was developed from the events tree of Figure 13.1d by dropping y33 of sub-tree y3 and node y30 (the latter, to avoid simultaneous input changes); in a like manner, the higher input sub-trees were pruned.

Each end-node is listed as a sequence of inputs and associated outputs, the top row consisting of the sequence of one or more input events leading to the end-node, the bottom row consisting of the associated sequence of output values. The output values or sequences associated with prior inputs are always taken from lower-level input-output sequences (where possible, and as pointed out by the arrows of sub-figure b). Special notice needs to be paid to the sequence (1 3 1 3), to which we can assign the present (rightmost) output value 0, but where we run into difficulties with the prior output values: for one thing, the input sequence (1 3 1) doesn't occur in level 3; nor do the input sequences (1 3) or (3 1) occur in level 2. In level 1, the only 1-element input sequence we can use is the input event 1, with its associated output value 0. The still needed output value for input event 3 isn't listed in level 1, and thus remains undefined (characterised by a dash).


Fig. 13.7 A verification tree of finite height — the synchronization-pulse problem

All the obtained input-output sequences of sub-figure b must now be chained together to create a continuous events graph, as shown in sub-figure c. This can be done in a number of ways, taking into account that the sequence of input events must be chosen so that, when switching from one to the next, only one input variable changes its value. Sometimes this is only possible by using level-1 or level-2 input-output sequences to connect one sequence with the next — these connecting sequences are referred to as glue. Finally, in the example of the synchronisation-pulse problem, the undefined value (marked by the dash) in sub-figure c must be replaced by the value 0, the reasoning for this choice being: if the circuit were to produce a 1-output in the emphasised location, this would run contrary to the required behaviour of the circuit.

Not all problems lead to verification trees whose end-nodes all have constant values, thus terminating the tree's growth. A typical example of a verification tree that keeps growing is that of a latch. Figure 13.8a shows the events tree of the D-latch, while sub-figure b shows the first preliminary expansion towards the verification tree — this has non-constant values for the nodes y01 and y10. Grafting the appropriate sub-trees again and again leads to the verification tree of sub-figure c. The question is, when do we stop grafting? In these cases, sub-trees build or form periods, as graphically emphasised in sub-figure c. Periods are recognised as periods by their repetition. After two periods we stop grafting sub-trees, as no further information is obtained. The remaining development follows the previous example, the only difference being the use of level-1 input-output sequences as glue, to form a proper transition from one level-4 input-output sequence to the next. The two different verification graphs of Figure 13.8d and Figure 13.8e point to the fact that the succession in which the level-4 input-output sequences are used is irrelevant.


Fig. 13.8 Pruning an infinitely growing verification tree — the D-latch

Finally, we take a brief look at the concept of reverse engineering. Suppose you are given a switching circuit with two inputs and one output as a 'black box' and are asked to find out, non-invasively, how it works or what it does. This is the basic problem of reverse engineering, and it is often relevant in a military, replacement, or competitive context. To solve the problem, you design, i.e., choose more or less arbitrarily, a sequence of input values to apply to the circuit being analysed, and measure the associated output values. You thus obtain an events graph giving you a first impression of the circuit's input-output behaviour. Suppose you have, by this process, obtained the events graph of Figure 13.9. Importantly, you need to choose a sufficiently long sequence of input events. To analyse the events graph, you develop from it the events tree, as discussed in section 13.1, this tree being the first preliminary result of the reverse-engineering task. You will, wisely, double-check


Fig. 13.9 Using the events graph of the passed-sample problem to demonstrate the process of reverse engineering

this result a number of times using different and possibly longer input sequences. When you reach a point where you always obtain the same events tree, the circuit you calculate from it is a work-alike of the circuit in the black box. If you get varying events trees, you will have to choose the one which suits your purpose best.

Chapter 14

Flow Table Development and Usage

Huffman's flow table originated from the analysis of sequential circuits built with electric relays, the tables showing the path to the specification and subsequent design of such circuits. But the flow table per se does not hint at how to design the circuit. Each of the steps Huffman proposed — finding equivalent states, merging them, and encoding them — entails multiple choices, some of which are quite arbitrary, and each of which leads to a different circuit realisation. Conventional switching theory never deviated from Huffman's three-step approach; instead, a lot of research was put into refining the steps with the intent of finding algorithmic ways to obtain results. All in all, these efforts were not very successful, cementing the impression that designing asynchronous circuits is a complicated, highly specialised endeavour. This chapter, on the other hand, is a complete break with Huffman's three-step approach to using the flow table to design a circuit. The design is based on so-called iterative catenation. But before we turn to this design method, we take a quick look at how to obtain the flow table.

14.1 Direct Specification of a Flow Table

Describing a colloquially specified sequential problem directly in a flow table is a quite intuitive and sometimes tedious process, as even the following very simple example of the priority-AND task of Figure 14.1a shows. This will let you appreciate the more formal approaches presented in the next section, especially when the problems you want to describe get more complicated.

Sub-figure (b): We start by drawing a primitive flow table for the given number of input events (if the problem has 2 input variables, the body of the flow table will consist of 2² = 4 columns), without as yet deciding on the number of states (rows). According to the problem's specification, the output y can, but need not, be 1 when both inputs are 1 (column 3), so that column 3 must contain at least two stable conditions or cells — one to which we assign the output value 0, the other being assigned the output value 1.


Fig. 14.1 Developing the flow table for the Priority-AND problem directly from its verbal specification.

The problem states implicitly that y is 0 when at least one of the input variables is 0. This is accounted for by assigning the value 0 to the output variable y in those rows whose stable conditions mark the columns 0, 1, and 2.

Sub-figure (c): We next decide on what GOTO commands to enter into the still empty cells of column 3 of sub-figure (b). This is done on a row-by-row basis. The choice, of course, is whether to enter the label c or d into a given cell. A GOTO-c command will cause a 0 output value, while a GOTO-d command will make the output become 1. When switching from the present input event (0, 0) to (1, 1), we implicitly decided in Figure 12.3 that the output should be 0, so we enter a GOTO-c command in cell (a, 3). When switching from column 1 to column 3, x2 is already 1, so that x1 changes its value only after x2 has become 1 — in cell (b, 3) we must thus enter a GOTO-c command. Finally, when switching from column 2 to column 3, x1 has become 1 prior to x2, which only now changes its value from 0 to 1 — to obtain a 1-output, we must enter a GOTO-d command into cell (e, 3).

Sub-figure (d): The stable states a, b, and e have in common that they are, per the verbal specification, associated with an output value of 0, whatever the prior input event was. Therefore, in the column containing stable state a, each of the other cells must contain a GOTO-a command. A like argument holds for the columns of b and e. As all cells in the flow table now have GOTO commands entered, the sequential problem is completely specified and the development of the flow table completed.

14.2 Events Graph to Flow Table

Developing a flow table directly from a verbal specification can be cumbersome, and for only slightly more intricate problems, such as the passed-sample problem of section 12.1, even error-prone. On the other hand, transforming a given events graph directly into a flow table is systematic, something that can be accomplished by a computer program. The following two sub-sections put forth two logically different methods.

Vingron — Deterministic Sub-Sequences of Input Events

Following the design principle of the events tree, it stands to reason to search the events graph for those and only those sub-sequences of input events that always invoke one and the same definitive partial output (be it a 0 or a 1) — we call such sub-sequences deterministic. Each deterministic sub-sequence, together with its partial output value, is transferred to the flow table. To demonstrate this method, we transform the events graph of Figure 14.2a into the flow table of Figure 14.2b, starting with the shortest sub-sequences.

A single input event is the shortest sub-sequence. Searching the events graph for all occurrences of the input event 0, we see that all output values it invokes are 0. In state a (i.e., row a) of Figure 14.2b we enter the stable (i.e., encircled) state a in column 0, and the invoked output value 0 in the output column y of row a. Likewise, the input events 1 and 2 are deterministic, leading to the stable states b and c. The input event 3 is not a deterministic sub-sequence, as it can cause either a 1-output (in the locations 5 and 15) or a 0-output (in the locations 7 and 13) of the events graph, and is thus not entered as a deterministic sub-sequence.

The effect the non-deterministic input event 3 has is next studied in connection with its prior input event. Of the possible dual input events (0, 3), (1, 3), (3, 3), (2, 3) we need to search for, only (1, 3) and (2, 3) occur, doing so in the present locations 5, 7, 15 and 13, respectively. Only (2, 3) is deterministic, leading to rows d and e of the flow table. The partial output value 0 created by (2, 3) is entered in row e of the column of output values y. The output value in row d can and must be identical to that of stable state c. The effect the non-deterministic dual input event (1, 3) has is next studied in connection with its prior input event. Presumably, the above explanations suffice to see how Figure 14.2b was developed. It should be noted that the method is algorithmic, i.e., it can be programmed.
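As the text notes, the method can be programmed; the following sketch of mine extends non-deterministic words one prior event at a time (the events graph is made up, and max_len bounds the search).

```python
# Find the deterministic sub-sequences of an events graph; words are
# written oldest-first, e.g. (1, 3): event 3 preceded by event 1.
def deterministic_words(x, y, max_len=4):
    events = sorted(set(x))
    words, found = [(e,) for e in events], {}
    for _ in range(max_len):
        longer = []
        for w in words:
            k = len(w)
            vals = {y[lam] for lam in range(k - 1, len(x))
                    if tuple(x[lam - k + 1 : lam + 1]) == w}
            if len(vals) == 1:
                found[w] = vals.pop()        # deterministic: keep it
            elif len(vals) > 1:              # prepend every prior event
                longer += [(e,) + w for e in events]
        words = longer                       # words never occurring drop out
    return found

x = [0, 2, 0, 1, 3, 1, 3, 1, 0]              # a made-up events graph
y = [0, 0, 0, 0, 1, 0, 0, 0, 0]
print(deterministic_words(x, y))
# -> {(0,): 0, (1,): 0, (2,): 0, (0, 1, 3): 1, (3, 1, 3): 0}
```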

Sandige — Transmitting all Columns of the Events Graph

While Vingron's method is closely related to the concept of the events tree, Sandige [1990]¹ proposed a simpler and more practical way to develop a flow table from an events graph, and it is of course this method that is recommended for general use. It is demonstrated in Figure 14.2, to which we now refer, using the Synchronisation-Pulse Problem of Figure 12.1.

¹ Not having found any other reference to this method, I attribute it to Sandige.


Fig. 14.2 Developing flow tables directly from the given events graph of sub-figure a. Sub-figure b is the flow table developed by Vingron's method, while sub-figure c is the flow table developed using Sandige's method. The events graph is that of the Synchronisation-Pulse Problem of Figure 12.1.

To achieve the transformation from an events graph to a flow table, the obvious but important fact must be stressed that each column of the events graph represents a stable condition of the circuit described. Next: the simple but important difference in drawing the (still empty) flow table lies only in the way its (internal) states or rows are named. The rows are named or labelled successively, from top to bottom, by the labels λ used to mark the locations in the events graph.

Now, let's see what to inscribe into a row of the flow table. In row λ = 1 of the flow table, the stable condition, marked by an encircled 1, is inscribed where this row intersects with column x = 0, the stable input event in the events graph where λ = 1. The output 0 in the output column y of the flow table is taken from the output value y of the events graph in location λ = 1. The final entry in the row is the command to go to the next stable state found in the events graph, which is in location λ = 2. This next location, the number 2, is entered into the flow table at the intersection of row λ = 1 and column x = 2, which is the input event in the next location. In a like manner, you transfer all locations of the events graph to the flow table. Note the many unspecified cells in this table, the cells with no GOTO commands entered.

Most importantly, note that the flow table of Figure 14.2c, just obtained, is completely different from that of Figure 14.2b, posing questions of validity and ambiguity, questions addressed in section 14.5. Now, having discussed, at least briefly, how to obtain a flow table, we come to the actual subject of this chapter: how do we develop a circuit from a given flow table?
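Sandige's transformation amounts to one row per location; in this sketch of mine the locations are 0-indexed and the events graph is made up, with None marking the unspecified cells.

```python
# Every location of the events graph becomes one row of a (primitive,
# sparse) flow table: the stable condition plus one GOTO entry.
def sandige(x, y, n_events=4):
    rows = {}
    for lam in range(len(x)):
        row = [None] * n_events            # unspecified cells stay None
        row[x[lam]] = lam                  # the stable condition itself
        if lam + 1 < len(x):
            row[x[lam + 1]] = lam + 1      # go to the next stable state
        rows[lam] = (row, y[lam])
    return rows

x = [0, 2, 0, 1, 3, 1, 0]                  # made-up events graph
y = [0, 0, 0, 0, 1, 0, 0]
for lam, (row, out) in sandige(x, y).items():
    print(lam, row, "| y =", out)
```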

14.3 Circuit Design by Iterative Catenation

Circuit synthesis by iterative catenation² is a complete break with the conventional design methods based on the flow table. The heart of the procedure — iterative catenation — is explained in Figure 14.3 for the example of the priority-AND problem; a code sketch of the step-by-step construction follows below.

1. Detach the column of output values from the flow table, giving you sub-figures a and b. Sub-figure a is usually called the transition table, while sub-figure b is the (Moore) output table.
2. As shown in sub-figure c, reproduce the transition table, replacing the name of every state variable by the associated output value taken from the output table, also renaming the state variable to the name of the output variable. The table obtained is called a self-reference table. To avoid contradictions of the form Y ⇔ ¬Y, every column of the body of the self-reference table must contain either a single constant value (either a 0 or a 1) or must be row-wise identical with the values of the output column; if the latter is not the case for a certain column of the body of the self-reference table, the column is said to be contradictory. Column 3 is a contradictory column. There can be more than one contradictory column.
3. A contradictory column is used to create a new output table, see sub-figure e, allowing us to build a new self-reference table, sub-figure f, which is analysed as stated above.
4. The process terminates when no further self-reference table can be constructed that has a contradictory column.

² Developed by Vingron [2004].
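The following sketch of mine carries out one catenation step: it builds the self-reference table and reports its contradictory columns. The priority-AND transition and output tables are reconstructed from the prose of section 14.1, so the state names may differ from Figure 14.3.

```python
# One catenation step: self-reference table plus contradictory columns.
trans = {"a": ["a", "b", "e", "c"], "b": ["a", "b", "e", "c"],
         "c": ["a", "b", "e", "c"], "d": ["a", "b", "e", "d"],
         "e": ["a", "b", "e", "d"]}
out = {"a": 0, "b": 0, "c": 0, "d": 1, "e": 0}

def self_reference(trans, out):
    """Replace every state name by the output value of that state."""
    return {q: [out[qp] for qp in row] for q, row in trans.items()}

def contradictory_columns(sref, out):
    bad = []
    for x in range(len(next(iter(sref.values())))):
        col = [sref[q][x] for q in sref]
        constant = len(set(col)) == 1
        rowwise = all(sref[q][x] == out[q] for q in sref)
        if not (constant or rowwise):
            bad.append(x)                   # basis for a new output table
    return bad

sref = self_reference(trans, out)
print(contradictory_columns(sref, out))     # -> [3], as stated in step 2
```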


Fig. 14.3 Demonstrating the process of iterative catenation to obtain all individual self-reference tables.

Fig. 14.4 Interpreting each self-reference table as a sub-tree of the events tree.

Self-reference is usually shunned in mathematics, but it is a very powerful tool when you can avoid contradictions.³

³ Possibly the best-known instance of self-reference is by God giving his name, i.e., defining himself as "I am who I am".


Each self-reference table corresponds to a sub-tree of an events tree. This is illustrated in Figure 14.4, which we now take a closer look at. Each column (or vector) of the body of a self-reference table contains the values of the circuit’s output when the input event marking the column is present. For instance, considering sub-figure a, whose output is y, the values of column x = 0 are the values of the partial output variable y0. Similarly in sub-figure b, whose output is y3: the output values in column x are those of the vector of partial outputs y3x. This allows us to draw the events tree of sub-figure c. Calculating the circuit from this tree, we get

Y3 ⇔ X1 ¬X2 ∨ X1 Y3 ⇔ X1 (¬X2 ∨ Y3)

Y ⇔ X1 X2 Y3 ⇔ X1 X2 · X1 (¬X2 ∨ Y3) ⇔ X2 · X1 (¬X2 ∨ Y3) ⇔ X2 Y3

where the term X1 (¬X2 ∨ Y3) is the expression just obtained for Y3 (marked by an underbrace in the original), the duplicate X1 being absorbed in the third step. This calculation finalises the circuit’s design, again demonstrating the value of catenation. In the next section we take a closer look at the self-reference tables, bringing them closer to the standard procedures in which merging equivalent states and encoding the states play such a dominant role.
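The derived formulas are easily checked by brute-force simulation. The following sketch iterates the feedback Y3 with zero delay, starting from Y3 = 0; it is an illustration of the priority-AND behaviour, not part of the book’s method, and the function name is invented.

def priority_and(events):
    y3 = 0
    for x1, x2 in events:
        y3 = x1 and ((not x2) or y3)   # Y3 <=> X1 (not X2 v Y3)
        y = x2 and y3                  # Y  <=> X2 Y3
        print(f"x1={x1} x2={x2} -> y={int(bool(y))}")

# X1 switched on before X2: the output goes to 1 while both are on.
priority_and([(0, 0), (1, 0), (1, 1), (0, 0)])
# X2 switched on before X1: the output stays 0.
priority_and([(0, 0), (0, 1), (1, 1), (0, 0)])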

14.4 Compound Self-Reference Table

As is only to be expected, there is a strong connection to the merging problem and the state assignment (i.e., encoding) problem of Huffman’s method. Both problems are covered by the so-called compound self-reference table, which is developed by combining the individual self-reference tables as shown in Figure 14.5a, where the individual self-reference tables of Figure 14.4a and Figure 14.4b are combined. This table constitutes the binary-encoded flow table, relieving you of the necessity to find an appropriate encoding. More often than not, these tables have redundant rows, rows that are mutually identical. From each group of identical rows drop all but one row, as shown in sub-figure b, giving you the compound self-reference table with the least tally of rows (or states); a small sketch of this reduction follows at the end of this section. Each compound self-reference table is a truth table with multiple outputs. To calculate each output it is convenient to split the multi-output truth table into as many single-output Veitch-Karnaugh maps as there are outputs.


Fig. 14.5 Combining the individual self-reference tables to create the compound self-reference table and calculating the circuit.

For our example, this is shown in sub-figures c and d. The calculation of their outputs is identical to that of the previous section. The formulas developed there lead to the circuit of sub-figure e. The point of this section is not the (repeated) development of the circuit; it is simply the introduction of the compound self-reference table, which is put to use in the next section to test whether two flow tables are equivalent.
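Here is the promised sketch of the row reduction. The encoding of rows as tuples of output values is an assumed representation, and the values are invented; only the dropping of identical rows is the point.

def compound(rows):
    """rows: one tuple per state, concatenating the entries of all
    individual self-reference tables. Keep one of each identical row."""
    seen, kept = set(), []
    for row in rows:
        if row not in seen:
            seen.add(row)
            kept.append(row)
    return kept

rows = [(0, 1, 0, 1, 0, 0),   # hypothetical state rows; the last two
        (0, 1, 1, 1, 0, 1),   # are identical and merge into one
        (1, 1, 1, 0, 1, 1),
        (1, 1, 1, 0, 1, 1)]
print(compound(rows))          # three rows remain: the least tally of states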

14.5 When Are Two Flow Tables Equivalent?

Flow tables are quite ambiguous. For one thing, a given sequential problem can be described by any number of very different flow tables. Compounding this, one and the same flow table can lead to a number of algebraic solutions when using the standard design techniques of finding equivalent states, reducing, and encoding them. Thus the question arises: when are two different flow tables equivalent?


Fig. 14.6 Developing the compound self-reference table for the flow table of Figure 14.2b and the partial outputs specified by the columns of the compound self-reference table.

Equivalent, that is, in the sense that they describe the same input-output behaviour of a circuit. We answer this question by stating the following conjecture: the input-output behaviour of a sequential circuit is uniquely described by an events tree, so that two circuits with the same events tree have the same input-output behaviour, making these circuits and their flow tables equivalent. For instance, suppose you are presented with the two flow tables of Figure 14.2b and c and need to decide whether they are equivalent. To do so, you develop the respective compound self-reference tables, as is done in Figure 14.6 and Figure 14.7. If these tables have identical partial outputs — as is the case in Figure 14.6 and Figure 14.7 — they have identical events trees, making the flow tables equivalent. Not surprisingly, the events tree is, in this case, the one pictured in Figure 12.1c.
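A sketch of the comparison this conjecture suggests. The tables are represented as dicts from input events to column vectors (the partial outputs), the state ordering is assumed to be the same in both tables, and the values shown are placeholders, not those read off Figures 14.6 and 14.7.

def partial_outputs(table):
    """table: dict mapping each input event to its column vector of
    output values; returned in a canonical form for comparison."""
    return {event: tuple(col) for event, col in table.items()}

def equivalent(table_a, table_b):
    return partial_outputs(table_a) == partial_outputs(table_b)

t_146 = {"x=0": [0, 0], "x=1": [0, 1], "x=2": [1, 1], "x=3": [0, 1]}
t_147 = {"x=0": [0, 0], "x=1": [0, 1], "x=2": [1, 1], "x=3": [0, 1]}
print(equivalent(t_146, t_147))   # True -> identical events trees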


Fig. 14.7 Developing the compound self-reference table for the flow table of Figure 14.2c and the partial outputs specified by the columns of the compound self-reference table.

Reviewing Part III

The reader versed in asynchronous sequential circuits will have found little resemblance between the conventional theory and the theory presented in part III. This stems from the marginalisation of Huffman’s flow-table approach, owing to that method’s inadequacy in solving the synthesis problem of sequential circuits. If, for some reason, you want to start with a flow table (which can be absolutely legitimate), then the reasonable thing to do is to apply iterative catenation to obtain self-reference tables from which to develop the events tree, allowing you to calculate the circuit directly. This avoids the problems you must confront in Huffman’s method with merging and, most notably, with the encoding of the internal states. If you want to stay closer to Huffman’s method, you can combine the individual self-reference tables into a single compound self-reference table, noting that it is a pre-encoded state-transition table.

To date, the events tree is the most powerful tool for synthesising a sequential circuit, avoiding all pitfalls of Huffman’s method. It does, of course, rely completely on the theory of feedback latches discussed in chapter 10. Far from being a disadvantage, this only points to the flawed condition of common switching algebra, where a theory of latches has never been developed. With respect to time-delays in sequential circuits (and latches), it must be pointed out that Huffman’s theory relies on the delays in the feedbacks having a finite (non-zero) value. With the method of the events tree, feedback is introduced not only as a theoretical necessity, by equation (12.1), but also as having no time-delay. It is this idealisation of time independence that enables the application of switching algebra (which is, nota bene, non-temporal) to the calculation of sequential circuits.

The key to solving the synthesis problem of asynchronous circuits is the use of iterative catenation (see section 14.3), again pointing to the importance of catenation in the theory of switching circuits. The method of iterative catenation is algorithmic, allowing for a computerised encoding of the internal states of a flow table. A novelty is the approach to the equivalence of flow tables, which in the language of automata theory refers to the question of two automata being equivalent.


Should the conjecture on the equivalence of flow tables be correct, this would put the events tree at the center of circuit synthesis.

As self-reference plays such an important role in iterative catenation, I cannot resist adding a very pretty problem that can only be solved by self-reference (albeit a problem that has nothing to do with switching algebra). We want to construct a stable platform in the center of a square, a platform consisting of four knives, each resting with one end on an upside-down water glass placed in a corner of the square. The size of the square is chosen so that the knives, when placed along the sides of the square, touch the glasses placed in the corners. No ancillaries of any kind may be used. If you want to find the solution yourself, interrupt your reading here.

The self-referring solution: to facilitate the explanation, think of the glasses as numbered successively. Place the first knife with its handle on glass 1, the knife pointing roughly towards the center of the square; support the knife’s blade in that position. This knife is now resilient (can carry a weight), as you are supporting its loose end. Place the next knife with its handle on glass 2 and with its blade on knife 1. Knife 2 is now also resilient, as it is supported by the resilient knife 1. Continue in like manner with knives 3 and 4, but rest knife 4 on knife 1 (which you are still supporting). Finally, apply self-reference: before removing your support of knife 1, rest its blade on knife 4 (which is, after all, resilient), interweaving the knives and allowing you to remove the support of knife 1, leaving you with a resilient platform in the center of the square. Each knife supports its successor, the first knife being the successor of the last.

Part IV

OCTOPUS — A Programming Language for PLCs

Introduction to Part IV

The first three parts of this book — on combinational circuits, memory circuits, and sequential circuits — present the more abstract or theoretical aspects of switching algebra and circuit design. But their application to real-world industrial problems is limited: try imagining an events tree or a flow table for, say, 20, 50, or 100 input variables — obviously nonsensical. Designing digital circuits for industrial automation must follow a completely different philosophy. The basic idea of such a philosophy must be to use pre-designed and standardised modules that can be put together in a LEGO-like fashion with little or no knowledge of switching algebra. This part of the book presents a realisation of such a philosophy for transition controlled processes governed by programmable logic controllers. The system of LEGO-like modules presented is called OCTOPUS for its distributed intelligence.

A Programmable Logic Controller, a PLC, is a computer designed to control industrial automation processes. As such it has a large number of inputs and outputs connecting it to the process and is, of course, designed and built for a rugged environment. The only processes considered here are so-called transition controlled processes. These send binary signals to the PLC, signals that contain all necessary information on the status and progress of the process. The PLC runs a program that acts on these input signals, calculating binary output signals it sends back to the process, thereby determining its further progress.

Today’s PLCs (usually) abide by the standard IEC 61131-3, which allows them to be programmed in any of five standardised languages. One of these languages, the general-purpose language called Function Block Diagram, FBD, is built around logic gates, allowing the programmer to combine these into reusable modules. OCTOPUS consists of such function blocks or modules and is thus an extension of the FBD language.

Chapter 15

Historical Remarks and Programming Overview

The beginnings of the Programmable Logic Controller go back to the years 1958 to 1960, when Odo Josef Struger headed a team that developed Allen-Bradley’s Programmable Logic Controller, a term Struger is credited with having coined. Struger eventually became vice president of Allen-Bradley, now the leading producer of PLCs in the USA. In 1968 Dick Morley, independently of Struger, invented a PLC for General Motors, the Modicon 084. PLCs were conceived as electronic replacements for racks of hard-wired relays, which were difficult to service and very difficult to adapt to changing requirements. But the legacy of these relay circuits carries over even to today’s PLCs, which can still be programmed as if connecting the electric relays of old. In the first section we take a look at the standardized languages used to program PLCs. The second section presents the background to OCTOPUS.

15.1 IEC 61131-3 Programming Languages

The PLC is a computer and as such needs a program to tell it what to do. In 1992, with the introduction of IEC 61131-3, industry agreed on standardizing a number of programming languages to use when writing programs for PLCs. These languages can be used together in one overall program and, ideally but not in actual fact, be moved without alteration from one PLC brand to another, as long as the PLCs comply with the IEC 61131-3 specifications. The five languages standardized are:

• Instruction List (IL), modeled along the lines of an assembler language;
• Structured Text (ST), a programming language along the lines of PASCAL;
• Ladder Diagram (LD), modeled after the graphic representation of electric relays;
• Function Block Diagram (FBD), modeled after the graphic representation of logic gates;
• Sequential Function Chart (SFC), a modified Petri graph.


The first four languages are general-purpose languages, meaning they can be used to develop any combinational, memory, or sequential circuit of the kind discussed in the previous parts of this book. But none of these languages hint in any way at how to develop a circuit or program that answers the following basic questions or tasks:

1. How do you realize a certain sequence of actions or steps of a process?
2. How do you start and run the sequence once and only once (Single Cycle Mode)?
3. How do you start and run the process multiple times, only allowing it to stop at the end of a completed cycle (Repeated Cycles Mode)?
4. How do you interrupt a running cycle, i.e., bring it to a premature standstill at the end of the running step (Cycle Interrupted Mode)?
5. How do you continue an interrupted cycle at the point of interruption?
6. How do you manually but successively activate one step of the process after another, forcing the process to interrupt after each step, and do so from the first step onward, as after an interrupt?
7. How can you define and implement a generally valid emergency stop (EMERGENCY STOP Mode)?
8. How do you realize a MANUAL Mode of operation so that you can switch safely from automatic mode (i.e., Single Cycle or Repeated Cycles) to manual mode and back?
9. How do you RESET the control circuit (i.e., put the circuit back into its initial state), say, after an Emergency Stop?
10. After all required modes of operation have been developed, you must still answer the question: How do you switch safely from one mode of operation to another?
11. How, for the operator of the process, do you visualize what mode the process is in?

The Sequential Function Chart, the SFC, was a reprieve in that it alone of the IEC 61131-3 languages offers a pre-established solution, at least to the first task, by enforcing a sequence of steps. This language also makes it easier to develop solutions for tasks 2 to 6 above. But these solutions are not part of the SFC language; they are developed anew for each automation problem and always employ one or more of the general-purpose languages IL, ST, LD, or FBD. In other words, program code developed for one problem can seldom be reused for another. The really tricky problems concerning EMERGENCY STOP, MANUAL MODE, and RESET are not touched on in any general form either in the literature, by PLC producers, or by OEMs.

To be able to consider the above automation tasks in any general form, we of course need to specify the kind of processes being considered — we call them Transition Controlled Processes.


These are repetitive processes that can always be broken down into distinct and elementary sub-processes¹, one following the other in a (usually) predefined succession. The completion of one elementary sub-process initiates the start of the next. To achieve the transition from one elementary sub-process to another, each creates a so-called transition signal² indicating the termination of the elementary sub-process and allowing (or forcing, if you will) the start of the next one. Each sub-process and elementary sub-process is implemented or carried out by an actuator³, these being governed by the output signals (the actions) of the transition controlled circuit.

15.2 The Precursor to OCTOPUS

The OCTOPUS modules to be presented in subsequent chapters started as an unnamed and unobtrusive project when Vingron [1975] presented his first reusable modules for transition controlled circuits at the Cranfield Fluidics Conference in Stockholm, Sweden. The design, see Figure 15.1, consisted of two standard modules: the Sequential Chain, to enforce the required sequence of operations or steps of the transition controlled process, and the Central Control Unit, to manually control and intervene in the process. But let us start by describing the process being automated.

The task is to fetch, move, and deposit objects. Cylinder A, when moving outward and back again (symbolically, A+, A−), fetches an object. Cylinder B, by moving outward (B+), moves cylinder A together with the object to a new location. There, cylinder A, by again moving outward and back again (A+, A−), this time deposits the object, after which cylinder B retracts (B−), thereby moving cylinder A back to its starting position. From a control point of view, one complete cycle of the process is described by the sequence of commands the circuit (the PLC) has to send to the actuators, i.e., by A+, A−, B+, A+, A−, B−. For the control engineer, the aforesaid about fetching, moving, and depositing is mere padding.

Now to the general layout — the importance of which lies in the fact that it is basically the same for all transition controlled circuits: it consists of two modules, the Central Control Unit (CCU) and the Sequential Chain, that are used without any modifications in all circuits. The actuators are not part of the logic circuit. The simplest part of the circuit is the Sequential Chain, which consists of identical memory circuits connected serially, each activating the memory to its right.

¹ For example, take the back-and-forth motion of a pneumatic cylinder to be a sub-process – note, a sub-process, not an elementary sub-process – then the motion of the cylinder’s piston from one limit position to another is taken to be an elementary sub-process.
² In the example of the pneumatic cylinder, the activation of a limit switch marking the end of a stroke creates the required transition signal.
³ Think, e.g., of cylinders, motors, heating devices, fans, or whatever.


Fig. 15.1 The first reusable modules of what was to become OCTOPUS


The output signals of the Sequential Chain are accumulating output signals, meaning they are sustained or level outputs that, once set (successively from left to right, as is the only way possible), remain set until the end of the cycle, at which point they are reset. Being level signals, the outputs can be used in switching-algebraic calculations of the logic connecting the Sequential Chain to the actuators. This aspect proved to be of fundamental importance in the later development of what came to be called the Actuator Control Units of an OCTOPUS circuit. The logic connecting the Sequential Chain to the actuators is characteristic of the task being solved and of the actuators being used. The inputs to the Sequential Chain are the transition signals, these in our example being the signals from the limit switches. For instance, if stage 1 of the Chain lets cylinder A move outward, the conclusion of this motion is registered by the limit switch a+, and this transition signal sets the second stage of the Chain.

To understand the circuit of the Central Control Unit, concentrate first on the INHIBITION whose output is called Main, the signal that always feeds the first stage of the Sequential Chain. The non-inverted input of the INHIBITION comes via a triple OR which accepts the signal from a push button that starts a single cycle. The output of the first stage of the Chain is fed back to the CCU, where it is led back to the INHIBITION, creating a closed loop (a memory circuit)⁴. This closed loop powers the Chain, allowing a full cycle to complete. The output of the last stage of the Chain is sent back to the CCU where it, together with the last transition signal of the cycle (b−), is passed on via an OR gate to block the output of the INHIBITION, thereby resetting the Sequential Chain. The feedback from the just-mentioned OR gate creates a closed loop only if the push button that starts the single cycle is still pressed when the cycle is supposed to end, thus forcing the signal Main to become 0. A small behavioural sketch of this Main logic follows below.

The sub-circuit of the CCU used to start repeated cycles is now quite easy to follow. It consists of a reset-dominant latch that is set by a push-button signal to start repeated cycles, and reset by a latch whose setting signal comes from a ‘stop at end of cycle’ or ‘start a single cycle’ button; these signals are always passed on to its output, but otherwise the latch is only activated when the ‘start repeated cycles’ signal is present. The output of the reset-dominant latch is used to make Main permanently 1, except when this signal is briefly interrupted to reset the Sequential Chain.

In the late 1960s, when the concept of the circuit organization shown in Figure 15.1 was developed, it was not yet common to employ sequential chains (of whatever design), let alone did anyone imagine the possibility of a universal control unit. But the actual achievements were minuscule compared with the 11 tasks listed in section 15.1. These only started to be addressed with the development of the OCTOPUS modules and associated circuit organization, which we take a first look at in the following section.
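The promised sketch is a rough behavioural reading of the Main-signal logic; the gate fan-in and the exact blocking feedback are inferred from the prose, so treat it as an interpretation of Figure 15.1, not its literal circuit. All names are invented.

def inhibition(a, b):            # the INHIBITION gate: a AND NOT b
    return a and not b

def main_signal(start_single, stage1, repeated, last_stage, b_minus):
    go = start_single or stage1 or repeated   # the triple OR feeding the INHIBITION
    block = last_stage and b_minus            # last stage set and b- present
    return inhibition(go, block)              # Main, feeding stage 1 of the chain

print(main_signal(False, True, False, False, False))  # True: stage-1 feedback holds the loop
print(main_signal(False, True, False, True, True))    # False: Main blocked, chain resets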

⁴ If you want the cycle to run autonomously only after, say, the second stage (not after the first), the output of the second stage, instead of the first, is fed back to the CCU. The push button to start the cycle must then remain pressed until the second stage is activated.


Fig. 15.2 A basic but complete OCTOPUS circuit

15.3 Basic Modules of an OCTOPUS Circuit

The concept of the OCTOPUS circuit — see the example of Figure 15.2 — represents a quantum leap compared to the circuit design of Figure 15.1. But first consider the similarities: we still have a central control unit (now called the SCV unit, for Status Control and Visualization) and the sequential chain, whose outputs still govern the actuators, albeit indirectly. Also, the basic circuits of the CCU and Sequential Chain of Figure 15.1 are retained. All modules are designed using only AND, OR and NOT gates, so that they can be seen as an extension of the Function Block Diagram (FBD) language. But that is as far as the similarity goes.

The prime interest lies, of course, with what could be called the main brain of the circuit — the Status Control and Visualization unit, which is detailed in chapter 19. First note the relatively large number of 10 manual inputs on the left of the SCV unit, each allowing a certain standardized intervention in the process. The actual state of the process is highlighted by pilot lights fed from the outputs concentrated on the right side of the SCV unit. The input and output leads drawn at the base of the SCV unit connect it to the other modules of the OCTOPUS circuit. As this section is only intended to provide an overview of an OCTOPUS circuit,


we leave the precise definitions of the manual interventions and the status visualization for a later, necessarily detailed review.

The Sequential Chain — detailed in chapter 17 — used in an OCTOPUS system is basically that of Figure 15.1, but the memory circuits are designed to be fail-safe, and each stage takes on further tasks, such as

• ensuring that the chain is interrupted prior to a ‘stuck-at-1’ transition signal,
• enabling the connection from the chain to the actuator to be interrupted so that the actuators can be operated manually,
• enabling stepping through successive stages of the chain manually,

thereby causing the overall logic to grow markedly.

The actual breakthrough achieved with OCTOPUS is due to the modules called Actuator Control Units (ACUs). Each actuator is controlled by an ACU, and each ACU is designed to control a specific type of actuator. The tasks of an ACU are complex and diverse and are discussed in great detail in chapter 20. In principle these tasks consist of

• handling manual mode of operation,
• switching between manual and automatic mode,
• handling the emergency-stop behaviour of the actuator, and
• generating a warning signal for the false status of an actuator when operating it manually.

There are some further modules of the OCTOPUS system, but these are introduced in appropriate contexts as we go along.

15.4 Circuit Development using Templates

To use OCTOPUS you need to have its modules installed on a development or programming platform running on a computer, as touched on in the next chapter. The easiest way to design a simple circuit for a linear (non-branched) sequential chain is to use a template adapted to the number of actuators of your problem. Such a template for two actuators is shown in Figure 15.3. The template always has an SCV unit connected to a Sequential Chain. The chain can be made as long as needed by appending further chains. The last output needed to realise a given task is fed back to the SCV unit.

The simple template shown in Figure 15.3 provides two actuators (the green modules) together with their Actuator Control Units. Should the suggested actuators and ACUs not be appropriate for your task, you can replace them by modules of your choice by drag and drop. Importantly, the output of an actuator is always a male connector that feeds into the female connector of the associated actuator module. The use of connectors is necessary so that all actuator modules of a finished circuit can be deleted before the circuit is compiled and loaded into a PLC. The labels of the male connectors of the ACUs are then used as outputs to the physical actuators of the process.


Fig. 15.3 An elementary template for an OCTOPUS circuit

The default variable names for the actuators and their associated ACUs are an enormous help, but can of course be overwritten by names of your choice. To adapt the template to the given task (written, in our example, in the top right quadrant of Figure 15.4), it is good practice to label the outputs of the sequential chain according to the cycle specified in the task. Note that the transition signal produced by the output of a given stage is always assigned to the inputs of the following stage of the chain (the first stage is taken to be the stage following the last).

The final step in the design is to develop the characteristic logic, the logic connecting the sequential chain with the ACUs of the actuators, as shown in Figure 15.5. This is usually done intuitively, i.e., without falling back on switching algebra. For our example the argument could go as follows (although an experienced circuit designer won’t need the explanations). Cylinder A is required twice to move outward and back again. A being an identity actuator, the signals needed to cause and maintain this movement must be sustained signals, as are conveniently provided by the sequential chain. When stage 1 is set, its output is passed, via an INHIBITION and an OR, to the ACU of cylinder A to make A move outward. When A reaches its outward position (marked by a+), stage 2 is set, blocking the INHIBITION’s output and forcing cylinder A to retract. The second back-and-forth motion of cylinder A is caused by stages 4 and 5 of the sequential chain.


Fig. 15.4 Adapting the template to the given task.

Fig. 15.5 Designing the characteristic logic.


Cylinder B is a memory actuator. Each input to a memory actuator needs to be blocked or reset before the other input can be set, which explains the need for the INHIBITION fed by stage 3 of the chain. The output from stage 6 to the ACU of cylinder B need not be blocked, as this signal is automatically returned to 0 when the chain is reset at the end of the cycle. Once the circuit has been completed, it is good practice to test and simulate its behaviour off-line using the NEXT STEP signal of the SCV. A sketch of the resulting characteristic logic follows.
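The sketch below evaluates the characteristic logic from the six accumulating stage outputs s1..s6. The exact gating is inferred from the prose and Figures 15.4 and 15.5, so treat it as an illustration rather than the book’s literal circuit; the function name is invented.

def characteristic_logic(s):
    s1, s2, s3, s4, s5, s6 = s
    a_out   = int((s1 and not s2) or (s4 and not s5))  # A outward during steps 1 and 4
    b_set   = int(s3 and not s6)                       # B+ from stage 3, blocked once stage 6 is set
    b_reset = int(s6)                                  # B- at the end of the cycle
    return a_out, b_set, b_reset

for k in range(7):                      # after step k, stages 1..k are set
    stages = [1] * k + [0] * (6 - k)
    print(k, characteristic_logic(stages))

Stepping through k = 1..6 reproduces the cycle A+, A−, B+, A+, A−, B−.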

15.5 Distributed Intelligence of OCTOPUS

Distributing the circuit’s intelligence between a central module (the SCV) and peripheral modules (the ACUs) made it possible to concentrate all control tasks universal to the control process as such in the central module, while delegating all tasks relevant only to single actuators to their specific (peripheral, ACU) modules. This partitioning of tasks made it possible not only to design an SCV module universally applicable to all transition controlled circuits, but also to do the same for the ACU of any given type of actuator. This replicates the distributed intelligence of an octopus’s brains: the octopus as a whole is governed by its main brain, but its arms (in our analogy, the actuators) work autonomously to a high degree.

Chapter 16

Designing and Testing on a Computer

Today’s state of the art in designing circuits for PLCs is to do so on a (personal) computer equipped with a program — called a development or programming platform — that handles and graphically displays the symbols of the (main) programming languages of IEC 61131-3, together with all necessary connecting leads. These programs are also capable of testing the designed circuits. Testing a complete PLC solution on the development platform of a computer, without being connected physically to the PLC, the actuators governing the process, or the actual transition signals produced by the process, is called off-line testing. This is challenging because the behaviour of the physical hardware, such as actuators, must be modelled by logic, timers and counters so that these models can be integrated into the development platform. This chapter gives a quick look at a development platform and touches on modelling two very common actuators.

16.1 Development Platform logi.CAD

OCTOPUS was developed on programming platforms of the logi.CAD 5.x series of the Austrian company logi.cals GmbH. The screenshot shown in Figure 16.1 represents the circuit of Figure 15.2. But, while in Figure 15.2 the actuators represent hardware realizations, Figure 16.1 contains models of the actuators compatible with the programming platform. These Actuator Modules are given a different colour (in our case, green) to distinguish them from the actual OCTOPUS modules, which we depict in gold. The reason for this distinction is that, before compiling the developed and tested circuit and loading it into a PLC, the Actuator Modules must be deleted. To this end, you will always see Actuator Control Units connected to the actuators (more precisely, to the models of the actuators) via so-called connectors whose signals, after (manually) deleting the Actuator Modules, function as output signals of the PLC — the output signals that connect to the real actuators of the process.


Fig. 16.1 Screenshot of an OCTOPUS circuit on the development platform of logi.CAD

The advantage of this approach and procedure is that every circuit developed and tested off-line on the development platform of a computer will work in identical fashion on a programmable logic controller, the PLC, connected to the real-world process. All modules are stored and governed by the development program and are positioned on screen by drag and drop. Figure 16.2 gives you an impression of how modules were developed (or at least drawn) using logi.CAD 5.x. The Function Block Diagram is drawn on the main screen, while its symbolic representation is developed on the smaller right-hand screen. In the example shown, the main screen contains only logic gates but can, in general, contain any symbol of the Function Block Diagram (FBD) language allowed by the standard IEC 61131-3.

16.2 Modelling Actuators

Linear motion is quite commonly realized with pneumatic cylinders. The motion of a cylinder’s piston is governed by an electro-pneumatic valve, while the piston’s (stationary) positions are detected by limit switches. The cylinder, the valve, and the limit switches, taken together as a unit, are what we refer to as an actuator. Obviously, physical actuators cannot be incorporated into a development platform; only a model of an actuator, consisting of logic gates, timers, counters, and other device models defined in IEC 61131-3, can be used.


Fig. 16.2 Screenshot of a Sequential Chain of four stages

The analysis of the linear motion and of the commands produced by the limit switches is put forth in Figure 16.3, to which we now turn. The electro-pneumatic valve is not considered at this stage, which is why it is faded out in sub-figure a, allowing us, at a later stage, to consider valves of different function. Sub-figure b shows a timing diagram for the output Z> of the actuator: at t1 the piston is given the command to move outward, while at t2 it is given the command to retract. The actual motion of the piston is idealised in sub-figure c. The time the piston takes for its outward stroke is t_out, while the inward stroke takes t_in, which can be longer or shorter than the outward stroke. These time spans are reflected by the signals z+> and z−>, respectively, of the limit switches, as shown in sub-figures e and d. The signal z−> of the inner limit switch (sub-figure d) is assumed to take t_z− to drop off; but to simplify the model without losing touch with reality, we take the activation of z−> to mark the successful completion of the inward stroke. In like manner, sub-figure e shows that we take the activation of z+> to mark the completion of the outward stroke, and that its dropping off is delayed by the time t_z+ with respect to the moment the piston starts its inward motion.

We can now model our first actuator, the one shown in Figure 16.4a. The valve on its own represents a device whose output signal Z> is always identical with its input signal >Z, so from the point of view of logic the valve is an identity (allowing us to call the actuator as such an identity actuator). In the circuit of sub-figure c this identity is represented by the lead connecting the input >Z with the output Z>.


Fig. 16.3 Modelling the cylinder’s motion of a pneumatic actuator

To model the motion of the piston, we refer to Figure 16.3. The outward motion of the piston is modelled in sub-figure e as the dependence of the signal z+> on the signal >Z of sub-figure b. The delays t_out and t_z+ of z+> with respect to >Z are realised by serially connecting the two top time delays, TON and TOF, as shown in Figure 16.4c. Returning to Figure 16.3, the inward motion of the piston is modelled in sub-figure d as the dependence of the signal z−> on the signal >Z of sub-figure b.


Fig. 16.4 Developing the model of the pneumatic identity actuator

To describe this dependence, think of >Z in sub-figure b as being inverted, allowing us again to express z−> as serially connected timers TON followed by TOF, but this time using the inverted signal ¬(>Z) as the input signal to TON, as shown in Figure 16.4. To summarise: the identity actuator pictured in Figure 16.4a is modelled by the circuit of sub-figure c, and it is this circuit that is represented by the actuator model of sub-figure b.

Modelling the actuator of Figure 16.5a is a simple matter, as we need only modify the model of Figure 16.4 to account for the valve, which in this case is a memory valve, giving the actuator as such the name memory actuator. The logic circuit equivalent to the memory valve is calculated in Figure 16.6 and is used in Figure 16.5c to replace the lead representing the identity valve in Figure 16.4b. The circuit of Figure 16.5c is represented by the actuator model of sub-figure b.
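A discrete-time sketch of the identity-actuator model: each limit-switch signal is produced by a TON (on-delay) feeding a TOF (off-delay), as just described. Time is counted in scan cycles, the timing values are invented, and the minimal timer semantics below are an assumption, not the IEC definitions.

class TON:                      # on-delay: output 1 once input held 1 for pt scans
    def __init__(self, pt):
        self.pt, self.et = pt, 0
    def __call__(self, inp):
        self.et = self.et + 1 if inp else 0
        return self.et >= self.pt

class TOF:                      # off-delay: follows a 1 at once, holds pt scans after drop
    def __init__(self, pt):
        self.pt, self.et = pt, pt + 1   # start expired: no spurious hold at power-up
    def __call__(self, inp):
        if inp:
            self.et = 0
            return True
        self.et += 1
        return self.et <= self.pt

out_t, in_t, zp_t, zm_t = 3, 2, 2, 2    # stroke and drop-off times in scans (invented)
ton_p, tof_p = TON(out_t), TOF(zp_t)    # serial pair producing z+>
ton_m, tof_m = TON(in_t),  TOF(zm_t)    # serial pair producing z->

for t, Z in enumerate([0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0]):  # the command >Z
    zp = tof_p(ton_p(Z))        # rises out_t after Z rises, falls zp_t after Z falls
    zm = tof_m(ton_m(not Z))    # mirrors the inward stroke on the inverted command
    print(t, Z, int(zp), int(zm))

Note that the model needs in_t scans at start-up to register the initially retracted piston, a harmless artefact of the simplified timers.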


Fig. 16.5 Developing the model of the pneumatic memory actuator

Fig. 16.6 Developing the logic feedback circuit of the pneumatic 4/2-valve, the pneumatic spool-and-sleeve valve for memory

Chapter 17

Sequential Function Chart and Sequential Chain

For what it is worth: the Sequential Function Chart is a simplified one-hot ring counter with an intrinsic feedback from the output to the input, while the Sequential Chain is better described as a simplified shift register, having no intrinsic feedback, only an external one (via its SCV unit). Further, the outputs of the SFC can only become 1 successively, each remaining 1 until the next output becomes 1. The outputs of the Sequential Chain also become 1 successively, but, having become 1, remain so until all are reset (usually at the end of the cycle); the sketch below contrasts the two disciplines. Of more practical and direct use is to see how the circuitry of these chains is designed. Notably, the respective circuits are not tied to a PLC; they can be implemented in any (asynchronous) technology.
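A toy comparison of the two output disciplines, purely illustrative and with invented function names:

def sfc_outputs(step, n):            # one-hot: only the active stage is 1
    return [int(i == step) for i in range(n)]

def chain_outputs(step, n):          # accumulating: stages 0..step stay 1
    return [int(i <= step) for i in range(n)]

for step in range(4):
    print(step, sfc_outputs(step, 4), chain_outputs(step, 4))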

17.1 Sequential Function Chart

The Sequential Function Chart is symbolically expressed as in Figure 17.1a. Each rectangle S0, S1, S2, etc., is a step or stage of the chart, and each creates a binary output signal when activated or set. The output signal is called an action because it starts some action of the process. The output signal of a stage is also passed on to the following stage via an element called a transition condition, the black bar parting two stages. The transition condition also accepts an external transition signal that, when present, passes the output of the prior stage on to the next stage, thus setting it. Stages can therefore only be activated or set successively. This chain, the SFC, has one property not explicitly pictured in its graphical representation: whenever a stage is set, it resets its prior stage. For the whole process to work, you always need to feed the output of the last transition condition back to the initial stage to set it. The initial stage is now the successor to the last stage, which it resets when it has itself been set.

The chain, as explained above, works beautifully — once it works.


Fig. 17.1 The symbolic representation of a Function Chart and its logic design

But even in the symbolic representation it is clear that, to start the chain functioning, the output signal of the initial stage (passed on internally to the first transition condition) must be 1. When switching the SFC on, i.e., powering it up for the first time, the initial stage must be set, while all other stages must be reset. Apart from this minor difference all stages function identically: setting the subsequent and resetting the prior stage.

The actual circuitry required to realize an SFC is shown in Figure 17.1b. It is easy to recognize that each stage in sub-figure a corresponds to a latch in sub-figure b. These latches can be either set-dominant or reset-dominant. Each transition condition (the bar parting two stages) of sub-figure a corresponds to an AND gate that sets the following latch. But the thing to note is the minute difference of the initial latch (which has the word ‘one’ inscribed). Although the dynamic operation of this latch is identical to all others, it differs in its behaviour when being powered up: it is set.


Fig. 17.2 Showing how to design a feedback latch whose output is 1 when being powered up with its input signals comprising a memorising input event

In general, the output values of an Eccles-Jordan latch, when being powered up with its input signals comprising a memorizing input event, are subject to a race and are thus undefined. But Eccles-Jordan latches are not the latches used in the circuit of sub-figure b. The latches used are single-output latches calculated by one of the general memory evaluation-formulas (9.9). Latches thus designed always have a 0-output on being powered up with their inputs comprising a memorizing input event. For this reason all latches of sub-figure b (other than the initial latch) have 0-outputs (this even being enforced for the last latch by it being reset). The common but rather uningenious way to force a 1-output on powering up is to simultaneously create a 1-pulse that is sent (via an OR gate) to the setting input of the latch. An alternative and presumably unique solution is shown in Figure 17.2.

Let us assume the latch of the symbol of sub-figure a to be dominantly resetting; then its input-output behaviour is expressed by the K-map of c. The evaluation — Q ⇔ S(R ∨ Q) — leads to circuit f. This latch, when being powered up with both inputs at 0, has a 0-output, meaning its output is reset. Inverting the latch’s function is expressed in K-map d, the evaluation of which is Q ⇔ S ∨ R Q, the graphical representation being shown in g. Notably, the output Q of this latch is reset on being powered up. Inverting g again, this time algebraically — Q ⇔ S ∨ R Q — leads to the circuit in h. This latch behaves identically to the one in f, but with the very notable difference that it is set on being powered up with both inputs at 0. This is the circuit represented by the symbol pictured in b.


Introducing the SFC as a standard had the indisputable advantage of giving the average engineer a tool to easily organize the succession of actions a process requires. But the SFC in its basic form falls short concerning safety and when manually intervening in the process, as is easy to see. Using the example A+, A−, B+, A+, A−, B− shown in Figure 17.1, assume a limit switch, say b+, to be stuck-at-1, unbeknown to the operator, meaning its output has become permanently 1. Note that, without any prior warning, stage 3, the stage that would emit the command B+, is skipped, meaning the actual cycle would degrade to A+, A−, A+, A−, where cylinder B is not activated at all. Clearly, this possible malfunction needs to be addressed by any programmer using an SFC. Most importantly, solving all questions related to manually intervening in a process governed by an SFC is left to the programmer using the SFCs. The solutions a programmer comes up with are usually specific to the problem at hand. Addressing this deficiency, i.e., developing a central control unit universally applicable to transition controlled processes, is a main concern of the OCTOPUS system of modules.

17.2 OCTOPUS’s Sequential Chain

In its basic form the Sequential Chain is exceedingly simple, as shown in Figure 15.1, each stage consisting only of a latch. But when comparing a stage of the basic chain with the stage of the chain actually used — see Figure 17.3 — you will notice a marked evolutionary advancement. To understand how the circuit works, consider its sub-circuits from the output on the right to the input on the left. The first sub-circuit is the all-important latch comprising a stage of the basic Sequential Chain, the only difference being that the original latch has been redesigned to be fail-safe. It is of course irrelevant whether the fail-safe latch is designed as a disjunctive or a conjunctive latch. But it is very important to make this latch fail-safe.

Fig. 17.3 The circuitry of a single stage of a Sequential Chain


The circuit feeding the output latch is used to interrupt the Sequential Chain by blocking the output of the stage: when the interrupt signal is 1, no input signal to the chain can be passed on to the output latch. On the other hand, a stepping signal can now cause the output latch of this, but only this, stage to be set, provided it is the first stage following the last stage set. The stepping circuit used to achieve this is the Priority-AND of Figure 14.3. The final sub-circuit, the inhibition at the input to the stage, ensures what could be called transitional security: the input signal (the transition signal that activates the stage) only sets the output latch if it is present before the next stage is set. This ensures that a stage is always interrupted in the presence of a stuck-at-1 transition signal. Summarizing, the Sequential Chain of OCTOPUS has the built-in ability to interrupt a cycle, to manually and successively step through a cycle, and to make sure that a cycle is interrupted at a stuck-at-1 transition signal.

17.3 Multi-Path Programs

Transition controlled processes can vary markedly in their required reaction to changing influences from the environment. For instance, it might be necessary to sort items by their size, weight, or other criteria, processing some items differently than others. Given certain measurements of an item’s property, you might want to skip specific steps in further processing the item, or you might want to repeat certain parts of its processing. These considerations lead to so-called multi-path processes and programs. The varying requirements are covered in the way a number of sequential chains are organised, the basic structures being shown in Figure 17.4.

The case of two simultaneous paths shown in sub-figure a would be used when two different operations can be carried out at the same time (i.e., simultaneously) on one and the same item, for instance having two robots position electronic parts on a circuit board. When both robots have finished their jobs (when robot 1 AND robot 2 have finished), the circuit board is moved on. In the case of two alternative paths, shown in sub-figure b, you would want to handle two different types of items, each in its own specific way, for instance two different types of circuit boards, each type handled differently. The circuit board could be chosen manually or determined automatically. It should be pointed out that sub-figure b can also be used to skip a sub-cycle: if the bottom path were to contain no sequential chain, then by choosing this (empty) path the sub-cycle of the top path would be skipped. Sub-figure c indicates how to repeat a given sub-cycle as long as a repetition signal r is 1. This signal could be generated manually, but would usually come from a counter. In the next two sections we consider the cases of sub-figures b and c in more detail.


Fig. 17.4 Basic layouts for multi-path transition controlled circuits.

Fig. 17.5 The Choose module for alternative paths.

17.4 Choosing Alternative Paths

The basic circuit by which to choose alternative paths is a switch consisting of an AND and an INHIBITION, as shown in Figure 17.4b.


But it is prudent to expand the switch circuit, as shown in Figure 17.5a, to ensure that a path, once chosen, cannot be changed by altering the value of r until the full cycle has been completed, at which point all chains are reset and a new value of r can take effect. The circuitry contained in the OCTOPUS module Choose of Figure 17.5b is of course that of sub-figure a.

The principal layout of a circuit with alternative paths is shown in Figure 17.6. Chain 1 is of course connected to the Status Control and Visualisation (SCV) module (not drawn). The output MAIN of chain 1 is passed on to the Choose module, which passes the signal MAIN on to either chain 2 or chain 3, depending on the value of r. The LOCK signals of these chains are sent back to the Choose module, which passes (only) the LOCK signal of the chosen chain back to chain 1. Importantly, the INTERRUPT and STEP signals are not affected by the choose option, so these signals are passed on to both alternative chains. As the conceptually basic circuit of Figure 17.4b indicates, the output signals of the alternative chains are connected by an OR before being passed on to the next chain. This requirement leads to the OR-battery, which passes the output signals of the active alternative chain on to chain 4. Finally, you need to ensure that chain 4 is started (or activated) by the correct (and only the correct) transition signal. This is guaranteed by the AND-to-OR circuit that activates the first stage of chain 4.
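A behavioural sketch of the Choose module: MAIN is routed to one of the two alternative chains according to r, and the choice is latched while the cycle runs. The signal names and the latching condition are inferred from the prose, so treat this as an interpretation, not the module’s literal circuit.

class Choose:
    def __init__(self):
        self.locked_r = None
    def __call__(self, main, r, lock2, lock3):
        if not main:                 # chains reset: a new value of r may take effect
            self.locked_r = None
            return 0, 0, 0
        if self.locked_r is None:
            self.locked_r = r        # latch the choice for this cycle
        main2 = main and self.locked_r        # path via chain 2 (the AND)
        main3 = main and not self.locked_r    # path via chain 3 (the INHIBITION)
        lock_back = lock2 if self.locked_r else lock3
        return int(main2), int(main3), int(lock_back)

choose = Choose()
print(choose(1, 1, 1, 0))   # r=1 chooses chain 2 -> (1, 0, 1)
print(choose(1, 0, 1, 0))   # changing r mid-cycle is ignored -> (1, 0, 1)
print(choose(0, 0, 0, 0))   # cycle over: choice released -> (0, 0, 0)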

17.5 Repeating Sub-Cycles

The very simple example pictured in Figure 17.7 shows a circuit to realise the full cycle A+, B+, B−, A− with the possibility of repeating the sub-cycle B+, B− as long as r is 1. First of all, when the repeat signal r is 0, both INHIBITIONS allow the MAIN signal to pass from one chain to another uninhibitedly, ensuring that the circuit runs the full cycle. The transition signals of the sub-cycle come into play when the repeat signal r is 1. In principle, the transition signals for the sequence to be repeated are AND-ed, which only has an effect on the first input to the repetition chain, chain 2.

It should be noted that the relative position of the sub-cycle within the full cycle is of importance. The case shown in Figure 17.7 is for a so-called medial sub-cycle, where the repeated sub-cycle is neither at the beginning nor at the end of the full cycle. An example of a circuit with a leading sub-cycle to be repeated is the full cycle A+, A−, B+, B− whose leading sub-cycle A+, A− is to be repeated. An example with an ending sub-cycle is A+, A−, B+, B− with the sub-cycle B+, B− to be repeated. These examples I put trustingly in your hands.


Fig. 17.6 The principal design of an alternative-path circuit.


Fig. 17.7 An example showing how the transition signals of the full and sub-cycle of a circuit with a repeating sub-cycle work together.

Chapter 18

Operating a Transition Controlled Process

There are a number of operations you must be able to carry out when controlling an automated process. First and foremost, you must have the possibility to run the process automatically or manually. Running a process in either of these modes is usually not a problem, but switching between them in a safe way always is. Another general requirement is to be able to bring a running process to a standstill. An orderly and easily achievable way to do so is at the end of a cycle. But bringing the process to a standstill before it reaches the end of a cycle poses the question: is it merely being interrupted, or is it being stopped due to an emergency? In the latter case you will be confronted with a host of critical questions and decisions that depend on the momentary state of the process. An aspect of greatest importance is to present the operator of a process with information on the state the process is in, without which no decisions as to further interventions can be made.

The above-mentioned problems have a general solution when considering transition controlled processes. These solutions are organised in OCTOPUS’s Status Control and Visualisation (SCV) module. The operations that control the process aren’t actually carried out by the SCV. It passes the commands on to the Sequential Chain and to the Actuator Control Units, only taking care that no mutually contradictory commands occur or, to put it differently, that only valid transitions from one input command to another are allowed.

18.1 Input Commands and State Visualisation

All OCTOPUS circuits are, in principle, organised as in Figure 15.2 or Figure 16.1. Although this chapter is dedicated to the Status Control and Visualisation (SCV) module, this module only functions as explained here when connected to the Sequential Chain, which itself is connected to the Actuator Control Units, which, in turn, are connected with the Sequential Chain, the Actuator Models, and the SCV.


Furthermore, the input commands in Figure 15.2 are pictured as being produced by push-buttons, although they can equally well stem from other processes. The state the circuit is in is documented by the appropriate pilot light being switched on.

We now turn to the State Visualisation Graph of Figure 18.1. Its nodes, by and large, represent the states expressed by the pilot lights but, as you can see, there are more nodes than pilot lights, meaning that not all internal states of the circuit are uniquely associated with a pilot light. For instance, both internal nodes marked D cause the pilot light ‘cycle interrupted’ to be switched on, but the circuit reacts differently depending on when the cycle was interrupted. Similarly, if both pilot lights F and G are on (i.e., when the circuit is in internal node F + G), the circuit reacts in a different way than it would when only pilot light F, or only pilot light G, were on.

A directed edge, pointing from one internal node (the ‘spring’) to another (the ‘trough’), carries the number associated with the input command responsible for this change of internal states from spring to trough. For instance, the directed edge pointing from A to B says that, when the circuit is in its ‘ready’ state, the input signal ‘single cycle’ puts the circuit in the ‘single cycle running’ mode. Those edges that emanate from an internal node and loop back to it stand for input signals that do not cause the present internal node to change. For instance, with the circuit in its ‘ready’ state, entering any of the commands ‘end cycle’, ‘emergency stop’, ‘reset’, ‘manual mode’, ‘end manual mode’, ‘continue cycle’, and ‘next step’ will have no effect on the circuit. This is not necessarily intuitive in all cases, and does need some justification. For instance, it is of course valid to ask why it isn’t possible in OCTOPUS to switch directly from ‘ready mode’ to ‘manual mode’. The reason for this special situation is a bit intricate, but we will come back to manual mode in a later section.

18.2 Automatic Mode of Operation

The minimal set of commands always needed are those that comprise the automatic mode of operation. These are the commands SINGLE CYCLE, to start one and only one cycle, REPEATED CYCLES, to start a process and have it repeat cycle after cycle, and END CYCLE, which brings a running process to a standstill at the end of the cycle. If automatic mode is all that is needed, then the only push-buttons and pilot lights required are those for the connections shown for the SCV of Figure 18.2. The visualisation graph for this simple situation is shown in sub-figure b. But there is one aspect this graph does not cover, which is best explained using an example. Assume we start a process that is in its ready state (state A) by entering a permanently present REPEATED CYCLES command. To bring the process to a standstill, we must enter a SINGLE CYCLE or END CYCLE command. The question is: will either of these signals override the still-present REPEATED CYCLES command? The answer is categorically yes:


Fig. 18.1 The state visualisation graph of OCTOPUS

the circuitry of the SCV unit ensures that the command of an edge leading from one node to another always dominates the command of an edge that loops back to its spring.


Fig. 18.2 A partial state visualisation graph for automatic mode of operation

When repeated cycles are started, how do you get the process back to its ready state? The only way is to switch to single cycle mode (node B), which, automatically after terminating (this is the meaning of the edge marked auto), puts the circuit back into its ready state (node A). It should be pointed out that terminating a process running repeated cycles by entering an END CYCLE command has the same effect as entering a SINGLE CYCLE command; that is, the process is returned to single cycle mode. This means that the END CYCLE command (or push-button) is not needed.
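A sketch of this automatic-mode fragment of the visualisation graph as a transition table. The dominance rule is encoded implicitly: commands without a table entry loop back and are thereby dominated by any command that has one. The state label C for ‘repeated cycles running’ is an assumption read into the prose.

EDGES = {
    ("A", "SINGLE CYCLE"):    "B",
    ("A", "REPEATED CYCLES"): "C",
    ("C", "SINGLE CYCLE"):    "B",   # overrides a still-present REPEATED CYCLES
    ("C", "END CYCLE"):       "B",   # same effect as SINGLE CYCLE
    ("B", "auto"):            "A",   # the single cycle terminates by itself
}

def step(state, command):
    return EDGES.get((state, command), state)   # no entry: loop back

s = "A"
for cmd in ["REPEATED CYCLES", "REPEATED CYCLES", "END CYCLE", "auto"]:
    s = step(s, cmd)
    print(cmd, "->", s)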

18.3 Interrupt, Continue, Next Step

A simple but frequently required operation is to be able to interrupt a running cycle. This is often connected with the need to later continue operation at the interrupted point, or to step through the remaining cycle (or parts of it) by manually starting one sub-process after another, but under control of the circuit. In practical terms, using OCTOPUS, only the push-buttons indicated on the left of the SCV unit of Figure 18.3 are needed to initiate the appropriate commands,


Fig. 18.3 The partial state visualisation graph for interrupt mode of operation

while the only added pilot light is that for CYCLE INTERRUPTED, to indicate the new state. If you start the next elementary process by entering a NEXT STEP command, the circuit remains in interrupted mode, so the pilot light simply remains on. But if you continue the cycle with a CONTINUE CYCLE command, the program switches to single cycle mode, as pictured by the edge from D to B.

Do note that in node A (the READY state) the push-buttons for CONTINUE CYCLE and NEXT STEP have no effect, meaning you can’t start the process with these buttons when it is in its READY state. Having taken this design decision, it was still necessary to provide the possibility of stepping through a cycle beginning with the first elementary process or stage. This is often needed to adjust (or readjust) the limit switches that provide the transition signals. To be consistent with the operational steps taken while a process is running, the decision was taken that, when the circuit is in its READY mode, the NEXT STEP button should only become effective after an INTERRUPT CYCLE command has been entered, as the sketch below illustrates.
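The design decision can be stated compactly: a NEXT STEP command is only served once an INTERRUPT CYCLE command has preceded it. A minimal sketch of this command gating (illustrative only, not the SCV circuit):

    # Stepping through a cycle from the READY state: NEXT STEP commands
    # are only served once INTERRUPT CYCLE has been entered first.
    sequence = ["NEXT STEP", "INTERRUPT CYCLE", "NEXT STEP", "NEXT STEP"]
    interrupted = False
    served = []
    for command in sequence:
        if command == "INTERRUPT CYCLE":
            interrupted = True
        served.append(command == "NEXT STEP" and interrupted)
    assert served == [False, False, True, True]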

18.4 Manual Mode and False Transition Signal

Manual mode for a running process can only be activated or entered if the circuit has been brought to a standstill, commonly by an interrupt signal, or else by an emergency stop signal. Bringing the circuit to a standstill always means interrupting the Sequential Chain, by which we mean that those of its outputs that are set remain set, while no further output can be set. But to be able to realise the manual mode of operation it was imperative to introduce the concept of Actuator Control Units (ACUs). These circuits are so fundamentally important and basic to the OCTOPUS concept that we cover them in detail in a dedicated chapter. At this point let us merely note that

• Only those actuators can be operated manually that are connected to ACUs, the ACUs providing connections for push-button signals to operate the actuators after MANUAL MODE has been emitted.
• A MANUAL MODE input accepted by the SCV interrupts all signal flow from the Sequential Chain to the ACUs. As the actuators are stationary or inactive in this situation anyway, disjoining the Sequential Chain from the ACUs never causes a problem.
• A pilot light called FALSE TRANSITION SIGNAL (unique to OCTOPUS) is switched on when any actuator is in a false position. This situation can and often does occur due to an EMERGENCY STOP, without any manual intervention in the process having occurred. This pilot light does not indicate which transition signal or limit switch is the perpetrator.
• To reconnect any actuator via its ACU with the Sequential Chain, the status of the actuator must coincide with the status required by the interrupted chain. To this end, you need to ensure that each actuator is in its correct position or status, if necessary doing so manually. All actuators are in their correct state if the pilot light FALSE TRANSITION SIGNAL is OFF.
• Special precautions are taken to safely reconnect the ACUs of identity actuators with the Sequential Chain. This is discussed in the chapter covering the design of ACUs.


Fig. 18.4 The partial state visualisation graph for Manual Mode, and End Manual Mode

Finally, take a look at Figure 18.4 where, apart from the inputs for the automatic mode of operation (SINGLE CYCLE, REPEATED CYCLES, and END CYCLE), only three further inputs are connected to the SCV. INTERRUPT CYCLE is used to bring the circuit to a standstill, after which MANUAL MODE can be entered; manual mode can only be ended if the FALSE TRANSITION SIGNAL is off and END MANUAL MODE is entered. The only pilot lights needed to indicate the state of the circuit are CYCLE INTERRUPTED and MANUAL MODE ACTIVE. When manual mode is ended, the pilot light CYCLE INTERRUPTED is switched on.

18.5 Emergency Stop

An Emergency Stop refers to bringing a process to a premature standstill in an emergency, under the constraint of preserving the life and health of personnel, or averting damage to the machine or process (the former unconditionally having priority over the latter). Never should the energy supply to the controlling circuit be cut off, as this would make any intervention in the process impossible. A catalogue or manual of pre-established actions to take in the event of specific emergencies is always helpful, but the personnel need to be well trained to act appropriately. To alleviate this problem of training, OCTOPUS uniquely provides the possibility to automate the actions of such a catalogue of emergency actions. We discuss the simplest case in this section. When an emergency occurs there is always an elementary sub-process momentarily running, meaning, e.g., a piston is in motion, a motor is running, a cooling system is cooling, a pump is pumping, etc. To simplify the problem for the moment, assume all actuators other than the momentarily active one to be inactive, meaning that all prior sub-processes that have thus far been started have terminated (which in fact is commonly the case). Without pressing reason, these now static or stationary actuators should remain inactive. Such a pressing reason for an inactive actuator to become active could occur in a chemical plant where, in an emergency, it might be necessary to block the flow of a dangerous gas by closing an otherwise open valve. For this situation we define a categorical safety position for an actuator.

• Categorical Safety Position (CP): An actuator, whether in an active state (e.g., in motion) or stationary, is sent to (or remains in) a safety state or position.

All other actuators that are inactive when an emergency occurs, meaning those for which there is no pressing reason to become active, are required to remain static or inactive. But if the emergency stop command is given or occurs while any of these actuators are active (e.g., in motion), then we want to make them act in one of the three following ways (of course within the laws of physics); a summary sketch follows the list.

• Reverse Direction (RD): An actuator in motion reverses its direction, returning to the state or limit position from which it started. An actuator in a stationary state (e.g., in a limit position) remains stationary.
• Go to Safety Position (SP): An actuator in motion goes to a predefined state or position, a so-called safety position (if necessary, reversing its action or motion to do so). An actuator in a stationary state (e.g., in a limit position) remains stationary (even if that state or limit position is not the safety state or position).
• Immediate Stop (IS): An actuator in motion is ordered to stop immediately. An actuator in a stationary state (e.g., in a limit position) remains stationary.
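The four behaviours can be summarised as a dispatch on the actuator’s momentary status. The sketch below merely restates the verbal definitions above (the ACU circuitry realising them follows in chapter 20); all names are illustrative.

    from enum import Enum

    class EStop(Enum):
        CP = "categorical safety position"
        RD = "reverse direction"
        SP = "go to safety position"
        IS = "immediate stop"

    def emergency_target(behaviour: EStop, moving: bool,
                         position: str, start: str, safety: str) -> str:
        """Position an actuator must assume on EMERGENCY STOP."""
        if behaviour is EStop.CP:   # sent to safety, moving or not
            return safety
        if not moving:              # RD, SP, IS: a stationary actuator stays put
            return position
        if behaviour is EStop.RD:   # return to where the motion began
            return start
        if behaviour is EStop.SP:   # head for the safety position
            return safety
        return position             # IS: stop immediately, where it is

    assert emergency_target(EStop.RD, True, "mid-stroke", "retracted", "extended") == "retracted"
    assert emergency_target(EStop.SP, False, "retracted", "retracted", "extended") == "retracted"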


The emergency stop behaviour put forth above is forced onto each actuator by its Actuator Control Unit the moment the EMERGENCY STOP signal is emitted. The circuitry to achieve this behaviour is discussed in the chapter on ACUs. The design of an OCTOPUS circuit stipulates that every actuator is governed by an Actuator Control Unit. And as each ACU realises exactly one of the above emergency stop behaviours, the engineer designing the control circuit must decide which of the possible ACUs to apply. On the other hand, by a simple drag-and-drop process on a development platform, an original choice can easily be revised. The circuits of Figure 15.2 and Figure 16.1 give an example of the ACUs chosen for the required actuators. Taking a look at the visualisation graph of Figure 18.1 you will notice that an Emergency Stop signal (arrow 5) can take effect (bring the circuit to node E) whenever the circuit is in automatic mode (nodes B or C), but also when it is interrupted (either of the nodes D). One might validly ask: why set off an Emergency Stop when the process has been interrupted? The answer is a question of timing: when an interrupt signal is given, the momentary elementary sub-process could still be running, and it is this elementary sub-process that needs to abide by the emergency stop command. In practice, emitting an EMERGENCY STOP signal is only the beginning of a very stressful situation, its remedy being to intervene manually in the operation of the process or machine, after which you might want to RESET the circuit.

18.6 Reset

Resetting a circuit is a drastic measure that usually (but not in the terminology used with OCTOPUS) means putting the circuit, and the process as such, back into its initial state. Doing this automatically, irrespective of the momentary state the process and circuit are in, is almost impossible. Therefore the term RESET in the context of OCTOPUS is restricted to mean resetting the Sequential Chain. How to do this safely, and what the consequences are, is discussed in the next chapter.

Chapter 19

The SCV-Module

This chapter puts forth the circuit of the Status Control and Visualisation module, the SCV-Unit, shown in previous chapters only as a ‘black box’. This is necessary information for those engineers who wish to program their own OCTOPUS system for test and research purposes. When developing your own OCTOPUS system, you might also want to develop a human-machine interface (HMI). The logic (or circuit, or program) powering such an HMI display is of course the circuit of the SCV-Unit. But, for the engineer using an OCTOPUS system, the prime importance lies in the precise definition, in plain prose, of what the input commands do and how they interact. Although these properties cannot be guaranteed with theoretical certainty, there is probably no other system for programming PLCs that has been tested so thoroughly, giving you an exceedingly high degree of reliability and safety.

19.1 SCV-Circuit and Display Panel Layout

The circuit of the SCV-unit of Figure 19.1 consists of two not quite independent parts. The central part (generating the outputs main, interrupt, step, FS, manual, reset, and stop) governs the status control and is the successor to the Central Control Unit of Figure 15.1. The other part of the circuit (with the outputs single cycle running, ready, repeated cycles running, cycle interrupted, emergency stop, manual active, and sequential chain reset) produces the information for the pilot lights that visualise the status of the process and circuit. The SCV circuit was the product of a lengthy trial-and-error process, and only single parts of it were developed algebraically. To understand the functioning of the circuit, you might want to start with the Central Control Unit of Figure 15.1. The functionality of this circuit is completely incorporated in the SCV circuit. But, admittedly, studying and understanding the expansions and newly incorporated interactions is challenging. They can only be understood within a fully functioning OCTOPUS circuit running on a development or programming platform of a computer (as mentioned, the one used to develop OCTOPUS was logi.CAD 5.x). For this reason a written text is not an adequate format to explain the intricacies of the SCV circuit. When using OCTOPUS on a PLC, chances are you will want to use an HMI unit to create the input signals to the circuit and to display the status lights. The circuit to drive these functions is of course the SCV circuit of Figure 19.1. The displays you program on the HMI unit could centre around those of Figure 19.2 and Figure 19.3.

Fig. 19.1 The circuitry of the Status Control and Visualisation module

Fig. 19.2 Possible layout for a command panel

19.2 Input Control Commands and Status Visualisation

The commands an OCTOPUS circuit accepts, the effect the commands have (i.e., the status they put the circuit in), and how they interact are defined here, together with the pilot lights that display the status. These verbal definitions and the State Visualisation Graph in Figure 18.1 complement each other in some cases. In general, one should not trust a control circuit whose commands and command interactions are not well defined and tested.


Fig. 19.3 Possible layout for a status display panel

Single Cycle. A given sequence of output signals, e.g., {A+, A−, B+, A+, A−, B−}, is started and acted upon one by one until the last output signal is emitted. Once the sequence has been started by pressing the push button ‘Single Cycle’, pressing the push button again has no effect. Only after the sequence has terminated can it be started again by pressing the push button ‘Single Cycle’. The sequence does not start again automatically if the push button ‘Single Cycle’ remains pressed after the output sequence has terminated, or if the push button’s output signal is otherwise stuck at 1. Status: The status light ‘Single Cycle Running’ is turned on (as the only status light) and the circuit is said to be in Single-Cycle Mode. When the sequence has terminated, the status light switches from ‘Single Cycle Running’ to ‘Ready’, indicating that the process is ready to be run again.
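The stipulation that a held-down (or stuck-at-1) push button must not restart the sequence means the command is taken from the rising edge of the button signal, not from its level. A minimal sketch of such edge detection (illustrative; the SCV realises this behaviour in circuitry):

    class RisingEdge:
        """Emit True for exactly one scan when the input goes 0 -> 1."""
        def __init__(self) -> None:
            self.previous = False

        def __call__(self, signal: bool) -> bool:
            pulse = signal and not self.previous
            self.previous = signal
            return pulse

    start = RisingEdge()
    # Button held down over three scans, then released: only the first
    # scan may start a cycle; a stuck-at-1 signal cannot restart it.
    assert [start(s) for s in (True, True, True, False)] == [True, False, False, False]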

Repeat Cycles. The given sequence of output signals, e.g., {A+, A−, B+, A+, A−, B−}, is started and then repeated again and again until stopped. If the sequence of output signals was originally started in Single-Cycle Mode, pressing the push button ‘Repeat Cycles’ before the output sequence has terminated puts the circuit in Repeated-Cycles Mode, letting the sequence be repeated indefinitely (until stopped). Status: The status light ‘Repeated Cycles Running’ is turned on (as the only status light) and the circuit is said to be in Repeated-Cycles Mode.


End Cycle. If the output sequence is being repeated (again and again), pressing the push button ‘End Cycle’ will let the sequence terminate at the end of the current cycle. Note: The same effect is achieved by pressing the push button ‘Single Cycle’. Status: The status light switches from ‘Repeated Cycles Running’ to ‘Single Cycle Running’, indicating that the circuit is put into Single-Cycle Mode. When the sequence has terminated, the status light switches from ‘Single Cycle Running’ to ‘Ready’, indicating that the process is ready to be run again.

Interrupt Cycle. When the process is running (whether in Single-Cycle Mode or in Repeated-Cycles Mode), the signal ‘Interrupt Cycle’ prematurely halts the transition controlled process by allowing the current elementary sub-process to complete normally, but not allowing the next elementary sub-process to start automatically. The immediately observable effect of entering an ‘Interrupt Cycle’ command is that all outputs of the Sequential Chain(s) are frozen, meaning that those outputs which are set remain 1, while those that have not yet been set remain 0. When the process is in its initial state (i.e., when the pilot light ‘Ready’ is on), pressing the push button ‘Interrupt Cycle’ puts the circuit in interrupt mode (without starting the first stage of the Sequential Chain), enabling you to step through the process (see command ‘Next Step’) beginning with the first stage. Status: The status light ‘Cycle Interrupted’ is switched on as the only status light. The only commands now accepted, if the process was running, are ‘Manual Mode’, ‘Continue Cycle’, ‘Next Step’, and ‘Reset’. But if the process was in its initial state, the SCV-Unit will not accept a ‘Continue Cycle’ input; it will only allow you to enter a ‘Manual Mode’, ‘Next Step’, or ‘Reset’ input.

Emergency Stop. The ‘Emergency Stop’ signal only works individually for those actuators to which Actuator Control-Units are connected. The effect of the signal is to bring (or retain) the actuators into predefined states or positions, not necessarily allowing the current elementary sub-process to continue normally (as discussed in detail for each Actuator Control-Unit). Once these positions are reached, the next elementary sub-process is not allowed to start. When an ‘Emergency Stop’ signal is entered, all outputs of the Sequential Chain(s) are frozen (as with the command ‘Interrupt Cycle’), meaning that those outputs which are set remain 1, while those that have not yet been set remain 0. Status: The only status light switched on is now ‘Emergency Stop Active’. The only input commands the SCV-Unit will now accept are ‘Manual Mode’ and ‘Reset’.

Reset. You can only enter a ‘Reset’ signal after the process has been halted by either an ‘Interrupt Cycle’ or an ‘Emergency Stop’ command. The primary effect of the ‘Reset’ signal is to reset the Sequential Chain, making all its outputs 0, effectively deleting the chain’s memory and putting it into the initial state it has after having been powered up. Importantly, the indicator light ‘False Transition Signal’ is off, and the whole logic governing this signal is now not functional and remains dysfunctional until the recovery procedure from ‘Reset’ has been successfully completed. The only input now accepted is ‘Manual Mode’, which will allow you to operate the actuators manually.


Status: The only status light switched on is ‘Sequential Chain Reset’. If you enter ‘Manual Mode’, the additional status light ‘Manual Mode Active’ will go on.

Recovering from ‘Reset’: The problem you will be confronted with after emitting a ‘Reset’ command is to bring the machine or process manually and safely back into its original state, and you will have to do so without any help from the pilot lights or from the indicator light ‘False Transition Signal’. First, enter ‘Manual Mode’ to operate the actuators in the succession required for the process to be recovered safely, finally bringing the actuators into their original or initial state. Second, once this has been achieved, enter ‘End Manual Mode’. The circuit is put in the ‘Ready’ state, finalising the recovery procedure. The pilot light ‘Ready’ is now the only one switched on. Beware of an incorrect initial state: when you enter ‘End Manual Mode’ to finalise the recovery process, the states in which the actuators happen to be are taken to be their initial states, these being the states the indicator light ‘False Transition Signal’ takes to be the correct initial state of the process. If you made a mistake defining the initial state, you must successively enter ‘Interrupt Cycle’ and then ‘Reset’, after which you can again run through the two steps of the recovery procedure described above.

Manual Mode. ‘Manual Mode’ can only be entered after halting the cycle either with an ‘Interrupt Cycle’ or an ‘Emergency Stop’ command, or after having entered a ‘Reset’ command. The effect is that the SCV-Unit sends a signal (called MANUAL) to all Actuator Control-Units, disconnecting them from the output signals of the Characteristic Logic or the Sequential Chain, and thus allowing the actuators to be operated manually by signals generated in the Actuator Control-Units (the sketch following this definition illustrates the principle). Note: By simply entering ‘Manual Mode’ no actuator may alter its status (e.g., position). Status: If the ‘Manual Mode’ command is entered after ‘Interrupt Cycle’ (i.e., when the pilot light ‘Cycle Interrupted’ is on), or after ‘Emergency Stop’ (i.e., when the pilot light ‘Emergency Stop Active’ is on), the status light will switch to ‘Manual Mode Active’, this then being the only status light switched on. When, now in Manual Mode, you operate an actuator, thus changing the value of a transition signal, the warning light ‘False Transition Signal’ will possibly (even probably) go on in addition to the status light ‘Manual Mode Active’ already on. If the ‘Manual Mode’ command is entered after a ‘Reset’ command, i.e., while the pilot light ‘Sequential Chain Reset’ is on, this light remains on and the additional status light ‘Manual Mode Active’ is switched on. This is the only case where two status lights are on. But note that the indicator light ‘False Transition Signal’ is now dysfunctional.
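The disconnection performed by the MANUAL signal is, seen abstractly, a two-way switch in each ACU between the Sequential Chain’s command and the hand-operated push-button signal. A sketch under that reading (names are illustrative):

    def acu_drive(manual_active: bool, chain_cmd: bool, push_button: bool) -> bool:
        # MANUAL disconnects the Sequential Chain from the actuator and
        # routes the hand-operated push-button signal instead.
        return push_button if manual_active else chain_cmd

    assert acu_drive(manual_active=False, chain_cmd=True, push_button=False) is True
    assert acu_drive(manual_active=True, chain_cmd=True, push_button=False) is False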

End Manual Mode. An active manual mode can only be ended by entering the signal ‘End Manual Mode’. Status: After successfully ending manual mode, the only status light normally on is ‘Cycle Interrupted’ or, in the case of a prior ‘Reset’ command, ‘Ready’. Normally (meaning ‘End Manual Mode’ was entered after either ‘Interrupt Cycle’ or ‘Emergency Stop’ followed by ‘Manual Mode’), the ‘End Manual Mode’ signal can only take effect if none of the actuators is in a false position (that is, if the status light ‘False Transition Signal’ is off), which you must ensure while still in manual mode. After ending manual mode, the circuit remains interrupted.


If you enter ‘End Manual Mode’ while both status lights ‘Sequential Chain Reset’ and ‘Manual Mode Active’ are on, the SCV-module switches to the ‘Ready’ status in which only this status light is on. But note that your overall process may be in a false state. Refer to the ‘Reset’ command.

Continue Cycle. If the circuit is interrupted (i.e., the pilot light ‘Cycle Interrupted’ is on), entering a ‘Continue Cycle’ command lets the circuit continue operation in Single-Cycle Mode (even if it was running in Repeated-Cycles Mode prior to having been interrupted). Status: The status light switches from ‘Cycle Interrupted’ to ‘Single Cycle Running’.

Next Step. If the circuit is interrupted (i.e., the pilot light ‘Cycle Interrupted’ is on), you can successively step through the output commands of the Sequential Chain(s) one by one. Each time a ‘Next Step’ command is entered, only the single next output of the Sequential Chain is activated. If you want to work through a complete cycle step by step, do not start with a ‘Single Cycle’ or ‘Repeat Cycles’ input command; start by entering ‘Interrupt Cycle’ and then use ‘Next Step’. If you change your mind about stepping through a cycle after entering ‘Interrupt Cycle’, you can return to the ‘Ready’ state by entering the following succession of input commands: ‘Reset’, ‘Manual Mode’, and ‘End Manual Mode’. Status: The status light ‘Cycle Interrupted’ is switched on as the only status light.

Chapter 20

Actuator Control Units

To control a transition controlled process we use an OCTOPUS circuit, the general layout of which is exemplified in Figure 15.2. In general an OCTOPUS circuit consists of three types of modules. The most complicated of these is the Status Control and Visualisation (SCV) module, which can be seen as the central brain of the control circuit, and whose operation is discussed in chapter 18 and further detailed in chapter 19 together with the circuit itself. At the other end, the simplest of the modules is (a single stage of) the Sequential Chain (SC) shown and explained in section 17.2. The third type of module is the Actuator Control Unit (ACU). The operation of every actuator is governed by an ACU which handles the following quite daunting tasks for its actuator:

• enabling the manual operation of the actuator,
• switching between manual and automatic mode of operation,
• realising the emergency-stop behaviour of the actuator, and
• generating a warning signal for a false status of the actuator.

Once told by the central brain, the SCV, what to do, the Actuator Control Unit (the ACU) carries out its task autonomously, meaning that the ACUs constitute the distributed intelligence of the OCTOPUS circuit. This chapter puts forth the circuitry of the ACUs of two types of actuators, the identity actuator of Figure 16.4 and the memory actuator of Figure 16.5. For each actuator type we need to develop four ACUs, one for each of the emergency stop situations defined in section 18.5:

• Categorical Safety Position (CP),
• Reverse Direction (RD),
• Go to Safety Position (SP), and
• Immediate Stop (IS).


Neither of the actuator types discussed here can realise an Immediate Stop, so the ACUs presented below don’t cover this possibility. No doubt, incorporating the Immediate Stop when and where necessary will pose no difficulty. Realising Reverse Direction (RD) and Go to Safety Position (SP) in the case of an Emergency Stop requires us to know the exact status (e.g., position) of the actuator. In other words, the status of an actuator must be monitored continuously to know what change in its status (if any) an Emergency Stop signal must enforce. The continuous monitoring of an actuator’s status is also imperative when switching to and from the manual mode of operation: here we need to be able to ensure that this transition leaves the actuators stationary.

20.1 ACUs for Identity Actuators

The easiest way to understand the working and organisation of an ACU is arguably to study an example circuit. The circuit of our choice is shown in Figure 20.1b. Its inputs and outputs correspond to those of the symbolic representation of Figure 20.1a, a representation used in OCTOPUS circuits. Typical for any ACU is that it consists of three sub-circuits: one responsible for the manual operation of the actuator, another for its emergency stop behaviour, and a third to create a false status reaction. The sub-circuit for manual operation is a memory circuit (one with four inputs) whose output Z> governs the behaviour of the actuator. Using a memory circuit in this case allows us to deduce the status of the actuator when neither of its limit switches is activated, i.e., when the piston is in motion. We take the motion of the piston to coincide with the latch’s output value: when the latch’s output Z> is 1, the piston is moving outward, and when Z> is 0, the piston is moving inward. The sub-circuit to manually operate an identity actuator is the same for all the ACUs irrespective of their emergency stop behaviour. To actually specify and calculate the circuit we use the VK-map of Figure 20.2d which we discuss in detail further on. The sub-circuit that initiates an Emergency Stop is always an independent circuit in its own right, its design depending on the emergency stop behaviour to be realised (e.g., RD, SP, or CP). Its output variable, which is arbitrarily named T, influences the main output Z> via the latch of the sub-circuit for manual operation so as to have a lasting, i.e., memorising effect. Here too, the actual specification and calculation of the circuit uses a VK-map as discussed below. The final sub-circuit of the ACU to be looked at outputs the False Status signal FS. But what is a false status or, with respect to the piston of the actuator being considered, a false position? A false status is signalled when the desired status >Z differs from the actual status Z>. This functionality is expressed by the XOR:

{FS} ⇔ {>Z} ⊕ {Z>}


Fig. 20.1 The circuitry of the ACUs of Identity Actuators


Fig. 20.2 VK-maps to calculate the circuitry of the ACUs for the Identity Actuators.


or by F ⇔ Y ⊕ Z when using the surrogate variables of Figure 20.2a.¹ Now, to calculate and specify the circuits, we refer to the VK-maps of Figure 20.2, noting that in these we quite generally use the surrogate variable names defined in Figure 20.2a, thereby simplifying the notation of the obtained logic formulas. Consider the VK-map of Figure 20.2d which specifies the sub-circuit to manually operate the actuator. When the input signal MANUAL MODE has not been given, meaning that M = 0, the ACU’s output Z must coincide with the output T provided by the emergency stop sub-circuit, this accounting for the leftmost and rightmost columns of the VK-map. This is abbreviated to ‘if M = 0 then Z = T’, as written next to the VK-map. Now assume that MANUAL MODE has been successfully activated, so that M = 1, in which case the actuator’s piston must be stationary in one of its limit positions. If only the push-button m, forcing the piston to move outward, is pressed (so that m = 1 and n = 0), the output Z of the considered sub-circuit must be 1. Importantly, Z must remain 1, must be memorised, when the push-button is not pressed any more, i.e., when m = 0. This is expressed by entering Z in the two cells where M = 1 and both m and n are 0. Inscribing the remaining cells, for retracting the piston, follows from like considerations. The evaluation of the VK-map of Figure 20.2d leads to

Z ⇔ (T M̄ ∨ m n̄ M) ∨ M m̄ n̄ Z.  (20.1)
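Read as a feedback latch, formula (20.1) can be evaluated scan by scan, the present output Z being fed back as an input to the next evaluation. A minimal sketch, assuming the complementations reconstructed in (20.1) above are correct:

    def manual_latch(T: bool, M: bool, m: bool, n: bool, Z: bool) -> bool:
        """One evaluation of (20.1): Z <=> T~M v m~nM v M~m~nZ."""
        return (T and not M) or (m and not n and M) or (M and not (m or n) and Z)

    Z = False
    Z = manual_latch(T=False, M=True, m=True, n=False, Z=Z)   # press m: Z is set
    assert Z is True
    Z = manual_latch(T=False, M=True, m=False, n=False, Z=Z)  # buttons released: memorised
    assert Z is True
    Z = manual_latch(T=True, M=False, m=False, n=False, Z=Z)  # M = 0: Z follows T
    assert Z is True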

We now turn to the VK-map of Figure 20.2c which specifies the sub-circuit that causes the actuator’s piston to reverse its direction when an emergency stop is given. This circuit must transfer the input signal Y to the output T when no stop signal S is given, which explains the two leftmost columns of the VK-map. The two rightmost columns (where S is 1) are explained row by row in the VK-map. The evaluation of the VK-map leads to

T ⇔ S̄ Y ∨ (S ∨ Y) p q ∨ S Y q.  (20.2)

The VK-map governing the behaviour of the actuator when the piston is required to go to a safety position in the presence of an emergency stop signal is shown in Figure 20.2b. The difference to reversing the piston’s direction is minimal. The formula obtained is

T ⇔ S̄ Y ∨ (S ∨ Y) p q.  (20.3)

The final case, realising a categorical safety position when an emergency stop signal is given, is a trivial sub-case, its circuit being shown in Figure 20.1d.

¹ In mathematical formulas a string of characters put between special delimiters, say braces, can be used as a variable. When this leads to awkward notation, one tries to use a single surrogate character instead, as done in Figure 20.2a.


20.2 ACUs for Memory Actuators

As in the case of the ACUs for identity actuators, the ACUs for memory actuators consist of distinct sub-circuits to handle the emergency stop behaviour, the manual operation, and the generation of the false status signal, as shown in Figure 20.3 and Figure 20.4 to which we refer in this section. Unlike the ACUs for identity actuators, the sub-circuits for emergency stop and for manual operation were developed intuitively. This was more easily possible as the sub-circuit for manual operation consists merely of two (symmetrical) parts that actually only switch between manual and automatic mode of operation. For the sub-circuit that realises the emergency stop, we first concentrate on the circuit of Figure 20.4c which enforces the reverse direction behaviour. Similar to the circuit used for manual operation, the reverse-direction emergency stop circuit is actually only a switch ensuring that either >Z− or >Z+ (whichever is appropriate) is passed on to the sub-circuit for manual operation. The emergency stop sub-circuit of Figure 20.4c, for reverse direction, is then expanded for the safety position, Figure 20.4b, and simplified for the categorical position, Figure 20.3b. Turning our attention to the sub-circuit which creates the false status signal FS, the first thing to strike us is the complexity of the circuit compared with the equivalent circuit of the ACU for the identity actuator (see Figure 20.1b). It is important to realise that all input signals, >Z− and >Z+, and the associated output signals, Z−> and Z+>, of the ACU for a memory actuator are short-lived. To turn them into sustained signals, we use the latches shown. These sustained signals can then be fed into the XOR gates to create the false status signal FS, as the following sketch illustrates.
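The essential mechanism, sustaining the short-lived pulses in latches and then comparing desired with actual status, can be sketched as follows (an illustration of the principle only; the actual circuits are those of Figures 20.3 and 20.4):

    class SRLatch:
        """Latch used to sustain a short-lived pulse (set dominant)."""
        def __init__(self) -> None:
            self.q = False

        def update(self, set_: bool, reset: bool) -> bool:
            self.q = set_ or (self.q and not reset)
            return self.q

    desired = SRLatch()   # sustains the >Z+ / >Z- command pulses
    actual = SRLatch()    # sustains the Z+> / Z-> feedback pulses

    desired.update(set_=True, reset=False)    # a >Z+ pulse arrives ...
    assert (desired.q != actual.q) is True    # ... actuator not yet there: FS = 1
    actual.update(set_=True, reset=False)     # Z+> confirms the new status
    assert (desired.q != actual.q) is False   # statuses coincide: FS = 0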

20.3 On the Usage of the FS Output Signals

The False Status (FS) signal a given Actuator Control Unit (ACU) emits only informs about the status of the actuator governed by that ACU. The correct operation of the Status Control and Visualisation (SCV) unit requires that none of the ACUs emit an FS signal of 1. This means: the FS signals of all actuator control units that output such signals must be entered into a multiple OR-gate, the output of which is sent to the input of the SCV unit marked FTS. The status of those actuators whose governing ACUs do not produce an FS output signal is not monitored. If any such actuator has a false status (is in a false position), the operator of the circuit is not told.
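In other words, the required connection is a plain disjunction of all available FS outputs (a sketch; names are illustrative):

    def fts(fs_signals: list[bool]) -> bool:
        # FTS input of the SCV: the OR over all FS outputs of the ACUs.
        return any(fs_signals)

    assert fts([False, False, False]) is False   # every monitored actuator correct
    assert fts([False, True, False]) is True     # at least one false status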


Fig. 20.3 The circuitry of the ACU that realises the Categorical Position (CP) for an EMERGENCY STOP of a Memory Actuator.


Fig. 20.4 The circuitry of the ACU that realises the Safety Position (SP) for an EMERGENCY STOP of a Memory Actuator.

Reviewing Part IV

Programmable Logic Controllers have been used in industrial and control processes for the last sixty years or so on an increasing scale, up to a point where they are so omnipresent that they are hardly noticed. While their hardware has developed enormously, their programming languages have remained practically unchanged for decades. Large sums of money go into training personnel to develop, test, maintain, and service PLC programs. Using OCTOPUS gives you the assurance of employing probably the best tested and most versatile system of modules presently available for designing circuits for transition controlled processes. Furthermore, its usage reduces programming time significantly, enables easy off-line testing, increases safety, and allows for remote maintenance, to mention only the immediately obvious advantages. Learning how to apply the modules of OCTOPUS is only a matter of hours for a versed and motivated engineer, and only a matter of a few days for a complete novice. The standardisation of terminology and of the meaning of concepts is very important in the engineering world, especially in safety sensitive fields such as automation. In this respect, special notice should be paid to the state visualisation graph of Figure 18.1 (a graph that should be a requirement for all commercially available systems) and the definitions of the input control commands of section 19.2. The primary breakthrough introduced with OCTOPUS is the Actuator Control Unit, enabling the realisation of Emergency Stop and the generation of a False Transition Signal, the latter allowing us to safely switch back from manual to automatic operation. The ideas presented in Part IV, i.e., in OCTOPUS, will hopefully be picked up and expanded on.

Correction to: Logic Circuit Design

Correction to: Chapters 1 and 2 in: S. P. Vingron, Logic Circuit Design, https://doi.org/10.1007/978-3-031-40673-7

The book was inadvertently published with images that have been changed between the original manuscript and the book. This has been corrected, and the figures that appear in the book match those that were originally submitted.

The updated versions of these chapters can be found at
https://doi.org/10.1007/978-3-031-40673-7_1
https://doi.org/10.1007/978-3-031-40673-7_2


Bibliography

I have taken the liberty of presenting only a very short selection of books and papers so as to focus further reading. Of course the selection, especially of books, is highly subjective and not to be taken as a rating. The papers are chosen for their historical relevance. In these times of the internet I believe this selective presentation of literature justifiable and, possibly, advantageous.

Selected Books

Brzozowski, J. A., M. Yoeli: ‘Digital Networks’. Prentice Hall, 1976.
Brzozowski, J. A., C.-J. H. Seger: ‘Asynchronous Circuits’. Springer-Verlag, 1995.
Caldwell, S. H.: ‘Switching Circuits and Logic Design’. John Wiley & Sons, 1958.
Clare, C. R.: ‘Designing Logic Systems using State Machines’. McGraw-Hill Book Company, 1973.
Dietmeyer, D. L.: ‘Logic Design of Digital Systems’. Allyn and Bacon, 1971.
Fasol, K. H., P. Vingron: ‘Synthese industrieller Steuerungen’. R. Oldenbourg Verlag, 1975.
Harrison, M. A.: ‘Introduction to Switching and Automata Theory’. McGraw-Hill Book Company, 1965.
Kohavi, Z.: ‘Switching and Finite Automata Theory’. McGraw-Hill Book Company, 1970.
Krieger, M.: ‘Basic Switching Circuit Theory’. McGraw-Hill Book Company, 1969.
McCluskey, E. J.: ‘Logic Design Principles’. Prentice Hall, 1986.
Muroga, S.: ‘Logic Design and Switching Theory’. John Wiley & Sons, 1979.
Pessen, D. W.: ‘Industrial Automation’. John Wiley & Sons, 1989.
Sandige, R. S.: ‘Modern Digital Design’. McGraw-Hill Publishing Company, 1990.
Unger, S. H.: ‘Asynchronous Sequential Switching Circuits’. John Wiley & Sons, 1969.
Vingron, S. P.: ‘Switching Theory’. Springer-Verlag, 2004.
Zander, H.-J.: ‘Entwurf von Folgeschaltungen’. VEB Verlag Technik, 1974.



Selected Papers

Ashenhurst, R. A.: ‘The Decomposition of Switching Functions’. Proc. of an International Symposium on the Theory of Switching, April 2–5, 1957. Annals of the Computation Lab. of Harvard University, vol. 29, pp. 74–116, 1959.
Huffman, D. A.: ‘The Synthesis of Sequential Switching Circuits’. Jour. of the Franklin Institute, vol. 257, pp. 161–190, March 1954 and pp. 275–303, April 1954.
Huffman, D. A.: ‘A Study of Memory Requirements of Sequential Switching Circuits’. Massachusetts Institute of Technology, Research Laboratory of Electronics, Technical Report No. 293, April 1955.
Huffman, D. A.: ‘The Design and Use of Hazard-Free Switching Networks’. Journal of the Association for Computing Machinery, vol. 4, pp. 47–62, January 1957.
Karnaugh, M.: ‘The Map Method for Synthesis of Combinational Logic Circuits’. Trans. AIEE, part I, vol. 72, no. 9, pp. 593–599, 1953.
McCluskey, E. J.: ‘Minimisation of Boolean Functions’. Bell System Technical Journal, vol. 35, no. 6, pp. 1417–1445, 1956.
Mealy, G. H.: ‘A Method for Synthesizing Sequential Circuits’. Bell System Technical Journal, vol. 34, no. 5, pp. 1045–1079, 1955.
Medvedev, I. T.: ‘On a Class of Events Representable in a Finite Automaton’. Avtomaty, Moscow, pp. 385–401, 1956. English translation in MIT Lincoln Laboratory Group Report, pp. 34–73, June 1958.
Moore, E. F.: ‘Gedankenexperiments on Sequential Machines’. In ‘Automata Studies’. Princeton University Press, 1956.
Quine, W. V.: ‘The Problem of Simplifying Truth Functions’. American Mathematical Monthly, vol. 59, pp. 521–531, 1952.
Quine, W. V.: ‘A Way to Simplify Truth Functions’. American Mathematical Monthly, vol. 62, pp. 627–631, 1955.
Shannon, C. E.: ‘A Symbolic Analysis of Relay and Switching Circuits’. Trans. AIEE, vol. 57, pp. 713–723, 1938.
Tracey, J. H.: ‘Internal State Assignment for Asynchronous Sequential Circuits’. IEEE Trans. on Electronic Computers, vol. EC-15, pp. 551–560, 1966.
Unger, S. H.: ‘Hazards and Delays in Asynchronous Sequential Circuits’. IRE Trans. on Circuit Theory, vol. CT-6, pp. 12–25, 1959.
Veitch, E. W.: ‘A Chart Method for Simplifying Truth Functions’. Proc. Pittsburgh Assoc. of Comp. Mach., May 1952.
Vingron, P.: ‘Coherent Design of Sequential Circuits’. IEE Proc., vol. 130E, pp. 190–201, 1983.
Zhegalkin, I. I.: The French title of the Russian original is Gégalkin, I. I.: ‘Sur le calcul des propositions dans la logique symbolique’. Mat. Sbornik, vol. 34, pp. 9–28, 1927.

Backmatter

This book is intended as a wakeup call in the stagnant and dormant field of switching algebra and logic circuit design. It presents the material in a concise but thorough way. The topics selected are an in-depth presentation of switching algebra, a theory of memory circuits (sometimes called flip-flops), a new approach to asynchronous circuits, and the presentation of a new programming technique (or language) for programmable logic controllers. Be ready for the unorthodox and controversial.



Index

A Actuator Control Unit (ACU), see Modelling ACUs Adjacency, 71 of arbitrary K-sets, 74, 75 of elementary K-sets, 74 operator, 74 vector, 74 APL, 27 Associativity, 31, 33 (see also Fig. 3.14)

C Catenation, 90 combinational circuits, 90–91 iterative, 169–171 latches, 129–134 Central Control Unit (CCU), 183 Circuit, 7 combinational, 8 completely specified, 9 incompletely specified, 9 sequential, 8 well-behaved, 7 Commutativity, 31, 33 (see also Fig. 3.13) Composition of combinational circuits, 89 generic function g, 89, 93–95 hinge function h, 89 hinge-sets Y0, Y1, 94 problem visualisation, 92–93 specified function f, 89

Composition of latches partial generic functions, 131 Consensus, 72, 77 operator, 77 Convention of adhesion, 28 Convention of separation, 29 Convention on omitting ∧, 29 Cover, 61 definition (see Eq. (5.13)) of elementary K-sets, 63 irredundant, 64 minimal, 85 minimal irredundant, 64 of prime sets, 63 Cycle description, 181

D DeMorgan, 38 Development platform, 189 compiling and downloading, 189 off-line testing, 189 screenshots logi.CAD, 189 Distributivity, 33 complete- (see Fig. 3.16) left- (see Fig. 3.16) one-sided (see Fig. 3.17) right- (see Fig. 3.16) Dual connectives, see Fig. 3.2 E Eccles-Jordan latch, 135 used as hinge latch, 136


Event, see Input event Events graph, 7, 145–146 Events table, 8 brief form, 8 explicit form, 9 general (see Fig. 4.1) Events tree, 146–148 edge, 146 input word, 146 law of feedback, 148 node, 146 partial output, 146, 167 Exclusions, 108–109, 112

F Flow table, 148–151 buzzing, 151 condition, 149 next, 150 present, 150 stable, 150 unstable, 150 principle of stability, 150 sequential automaton, 148, 149 state (internal), 149 state variable, 149 Function, 8 Functionally complete, see Fig. 3.2

G Gate, 11, 30 connecting logic gates, 11 derived (see Fig. 3.9) dyadic, 30 monadic, 30 multiple inputs, 31 Gray code, see Fig. 5.2

H Huffman, see Flow table Hysteresis, 6

I IEC 61131-3, 179 programming languages, 179 IEC 61131-3 standard, 178 Inclusions, 108–109, 112 Input event, 8, 57 memorising, 105 multiple, 57

resetting, 61, 105 setting, 61, 105 Instance, 10 K Karnaugh set, see K-set Karnaugh tree (K-tree), 66 K-set, 55, 56 disjoint K-sets, 75 elementary, 57 maximum, 57 L Latch delayed feedback, 121–123 fail-safe design, 124–126 formal definition (see Memory function) informal definition, 103 Law of feedback, 148 Location, 8 Logic formula, 10 complementary, 11 Logic functions, see Fig. 3.1 Logic gates, see Gate Logic variable, 10, 13 definition (see Eq. (1.2)) M Main brain, 184 Map entered variable, 65 Maxterm, 45 generalised, 59 interpretation, 45 Memorisation hazard, 119 in a conjunctive latch, 120 in a disjunctive latch, 119 Memory evaluation-formulas, 112 basic feedback -, 116 general feedback -, 117 Memory function, 105 well-behaved, 105–107 Minimisation iterative consensus, 79 Quine, 46 table of possible covers, 83 Minterm, 44 generalised, 55 interpretation, 45 mnemonic evaluation, 44 Modelling ACUs identity actuators, 191 linear motion, 191 memory actuators, 193

N Normal forms, 45, 48 AND-to-OR, 45 canonical, 45 OR-to-AND, 45 Zhegalkin, 48 Zhegalkin, dual form, 50 Numeric variable, 13 O OCTOPUS, 178, 181–188 multi-path programs, 199 OCTOPUS modes of operation Automatic END CYCLE, 207, 219 REPEAT CYCLES, 207, 218 SINGLE CYCLE, 207, 218 See also Fig. 18.2 Emergency Stop, 212 Categorical Safety Position (CP), 212 EMERGENCY STOP, 219 Go to Safety Position (SP), 212 Immediate Stop (IS), 212 RESET, 219 Reverse Direction (RD), 212 Interrupt CONTINUE CYCLE, 210, 221 INTERRUPT CYCLE, 210, 219 NEXT STEP, 210, 221 See also Fig. 18.3 Manual END MANUAL MODE, 210, 220 FALSE TRANSITION SIGNAL, 210 MANUAL MODE, 210, 220 See also Fig. 18.4 Reset, 213 State Visualisation Graph (see Fig. 18.1) OCTOPUS modules ACUs for a Memory Actuator (see Fig. 20.3 and Fig. 20.4) ACUs for an Identity Actuator (see Fig. 20.1) Choose module (see Fig. 17.5) Identity Actuator (see Fig. 16.4) Memory Actuator (see Fig. 16.5) SCV module (see Fig. 19.1) Sequential Chain (see Fig. 17.3) P PLC, see Programmable Logic Controller (PLC)

Pneumatic valve, 13 normally closed (NC), 15 normally open (NO), 14 normal state, 14 shuttle valve, 16 spool-and-sleeve, 13 Prime set, 63 essential, 82 Programmable Logic Controller (PLC), 178 Proposition, 10

R Redundance, 4 Relation, 8 Relay, 13, 17 feedback and latch, 21 inverted circuit, 20 network, 19 parallel, 19 serial, 19 normally closed (NC), 18 normally open (NO), 18 normal state, 18, 19 symbolic representation, 18 transmission, 18

S Sampling latch, 104 Schmitt trigger, 6 SCV unit, see Status Control and Visualization unit (SCV) Self-reference table, 169 compound, 171–173 Sequential Chain, 181, 185 Sequential Function Chart (SFC) action, 195 initial latch, 196 stage, 195 transition condition, 195 transition signal, 195 Signal, 5, 13 analogue, 5 binary, 4, 6, 13 falling edge, 7 rising edge, 7 Status Control and Visualization unit (SCV), 184 Sub-sequences, deterministic, 167 Subset operator, 78

Sub-tree, 146 base tree, 146 identical sub-trees, 157 internal sub-tree, 146 Switching circuit, see Circuit T Tautology, 10 logical equivalence, 10 Template, 185 Timing diagram, 4, 7 binary, 7 Transistor, 22 CMOS, 22 IG-FET, 22 MOS, 22 NMOS, 22 PMOS, 22 transition controlled processes, 178

Truth table, 60

V Vacuum tube, 22 Veitch-Karnaugh, see VK Verification graph, 160 Verification tree, 160 VK-map, 53 dimension, 65 multiple inputs (see Fig. 5.3) proving logic theorems, 59 reduced, 65 standard, 65 VK evaluation formulas, 62

W Well-behaved circuit, 7, 147 Well-behaved latch, see Memory function