Lena Wiese
Advanced Data Management
For SQL, NoSQL, Cloud and Distributed Databases
Author
Dr. Lena Wiese
Georg-August-Universität Göttingen
Fakultät für Mathematik und Informatik
Institut für Informatik
Goldschmidtstraße 7
37077 Göttingen
Germany
[email protected]
ISBN 978-3-11-044140-6
e-ISBN (PDF) 978-3-11-044141-3
e-ISBN (EPUB) 978-3-11-043307-4

Library of Congress Cataloging-in-Publication Data
A CIP catalog record for this book has been applied for at the Library of Congress.

Bibliographic information published by the Deutsche Nationalbibliothek
The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at http://dnb.dnb.de.

© 2015 Walter de Gruyter GmbH, Berlin/Boston
Cover image: Tashatuvango/iStock/thinkstock
Printing and binding: CPI books GmbH, Leck
Printed on acid-free paper
Printed in Germany
www.degruyter.com
To my family
Preface

During the last two decades, the landscape of database management systems has changed immensely. Because data are nowadays stored and managed in networks of distributed servers (“clusters”), and because these servers consist of cheap hardware (“commodity hardware”), data of previously unthinkable magnitude (“big data”) are produced, transferred, stored, modified, transformed, and in the end possibly deleted. This form of continuous change calls for flexible data structures and efficient distributed storage systems with both high read and high write throughput. In many novel applications, the conventional table-like (“relational”) data format may not be the data structure of choice – for example, when easy exchange of data or fast retrieval become vital requirements.

For historical reasons, conventional database management systems are not explicitly geared toward distribution and continuous change, as most implementations of database management systems date back to a time when distributed storage was not a major requirement. These deficiencies might also be attributed to the fact that conventional database management systems strive to comply with several database standards and to provide strong safety guarantees (for example, regarding concurrent user accesses or correctness and consistency of data).

Several kinds of database systems have emerged and evolved over the last years that depart from the established tracks of data management and data formats in different ways. Development of these emergent systems started from scratch and gave rise to new data models, new query engines and languages, and new storage organizations. Two features of these systems are particularly remarkable: on the one hand, a wide range of open source products is available (though some systems are supported by or even originated from large international companies), and their development can be observed or even influenced by the public; on the other hand, several results and approaches achieved by long-standing database research (having its roots at least as early as the 1960s) have been put into practice in these database systems, and these research results now show their merits for novel applications in modern data management. On the downside, there are basically no standards (with respect to data formats or query languages) in this novel area, and hence portability of application code or long-term support can usually not be guaranteed. Moreover, these emerging systems are not as mature (and probably not as reliable) as conventional established systems.

The term NOSQL has been used as an umbrella term for several emerging database systems without an exact formal definition. Starting with the notion of NoSQL (which can be interpreted as saying no to SQL as a query language), it has evolved to mean “not only SQL” (and is hence written as NOSQL with a capital O). The actual origin of the term is ascribed to the 2009 “NOSQL meetup”: a meeting with presentations of six database systems (Voldemort, Cassandra, Dynomite, HBase, Hypertable, and CouchDB). Still, the question of what exactly a NOSQL database system is cannot be answered unanimously; nevertheless, some structure slowly becomes visible in the
NOSQL field and has led to a broad categorization of NOSQL database systems. The main categories of NOSQL systems are key-value stores, document stores, extensible record stores (also known as column family stores) and graph databases. Yet other creatures live out there in the database jungle: object databases and XML databases espouse neither the relational data model nor SQL as a query language – but they typically would not be considered NOSQL database systems (probably because they predate the NOSQL systems). Moreover, column stores are an interesting variant of relational database systems.

This book is meant as a textbook for computer science lectures. It is based on Master-level database lectures and seminars held at the universities of Hildesheim and Göttingen. As such it provides a formal analysis of alternative, non-relational data models and storage mechanisms and gives a decent overview of non-SQL query languages. However, it does not put much focus on installing or setting up database systems and hence complements other books that concentrate on more technical aspects. This book also surveys storage internals and implementation details from an abstract point of view and describes common notions as well as possible design choices (rather than singling out one particular database system and specializing in its technical features).

This book intends to give students a perspective beyond SQL and relational database management systems and thus covers the theoretical background of modern data management. Nevertheless, this book is also aimed at database practitioners: it wants to help developers and database administrators come to an informed decision about which database systems are most beneficial for their data management requirements.
Overview

This book consists of four parts.

Part I Introduction commences the book with a general introduction to the basics of data management and data modeling.

Chapter 1 Background (page 3) provides a justification of why we need databases in modern society. Desired properties of modern database systems like scalability and reliability are defined. Technical internals of database management systems (DBMSs) are explained with a focus on memory management. Central components of a DBMS (like the buffer manager or the recovery manager) are explored. Next, database design is discussed; a brief review of Entity-Relationship Models (ERM) and the Unified Modeling Language (UML) rounds this chapter off.

Chapter 2 Relational Database Management Systems (page 17) contains a review of the relational data model by defining relation schemas, database schemas and database constraints. It continues with an example of how to transform an ERM into a relational database schema. Next, it illustrates the core concepts of relational database theory like normalization to avoid anomalies, referential integrity, relational query languages (relational calculus, relational algebra and SQL), concurrency management and transactions (including the ACID properties, concurrency control and scheduling).

Part II NOSQL And Non-Relational Databases comprises the main part of this book. In its eight chapters it gives an in-depth discussion of data models and database systems that depart from the conventional relational data model.

Chapter 3 New Requirements, “Not only SQL” and the Cloud (page 33) admits that relational database management systems (RDBMSs) have their strengths and merits but then contrasts them with cases where the relational data model might be inadequate and touches on weaknesses that current implementations of relational DBMSs might have. The chapter concludes with a description of current challenges in data management and a definition of NOSQL databases.

Chapter 4 Graph Databases (page 41) begins by explaining some basics of graph theory. Having presented several choices for graph data structures (from adjacency matrix to incidence list), it describes the predominant data model for graph databases: the property graph model. After a brief digression on how to map graphs to an RDBMS, two advanced types of graphs are introduced: hypergraphs and nested graphs.

Chapter 5 XML Databases (page 69) expounds the basics of XML (like XML documents and schemas, and numbering schemes) and surveys XML query languages. Then, the chapter shifts to the issue of storing XML in an RDBMS. Finally, the chapter describes the core concepts of native XML storage (like indexing, storage management and concurrency control).
Chapter 6 Key-value Stores and Document Databases (page 105) puts forward the simple data structure of key-value pairs and introduces the map-reduce concept as a pattern for parallelized processing of key-value pairs. Next, as a form of nested key-value pairs, the JavaScript Object Notation (JSON) is introduced. JSON Schema and Representational State Transfer are further topics of this chapter.

Chapter 7 Column Stores (page 143) outlines the column-wise storage of tabular data (in contrast to row-wise storage). Next, the chapter delineates several ways for compressed storage of data to achieve a more compact representation, based on the fact that data in a column are usually more uniform than data in a row. Lastly, column striping is introduced as a recent methodology to convert nested records into a columnar representation.

Chapter 8 Extensible Record Stores (page 161) describes a flexible multidimensional data model based on column families. The surveyed database technologies also include ordered storage and versioning. After defining the logical model, the chapter explains the core concepts of the storage structures used on disk and the ways to handle writes, reads and deletes with immutable data files. This also includes optimizations like indexing, compaction and Bloom filters.

Chapter 9 Object Databases (page 193) starts with a review of object-oriented notions and concepts; this review gives particular focus to object identifiers, object normalization and referential integrity. Next, several options for object-relational mapping (ORM) – that is, how to store objects in an RDBMS – are discussed; the ORM approach is exemplified with the Java Persistence API (JPA). The chapter moves on to object-relational databases that offer object-oriented extensions in addition to their basic RDBMS functionalities. Lastly, several issues of storing objects natively with an Object Database Management System (ODBMS) – like, for example, object persistence and reference management – are attended to.

Part III Distributed Data Management treats the core concepts of data management when data are scaled out – that is, when data are distributed in a network of database servers.

Chapter 10 Distributed Database Systems (page 235) looks at the basics of data distribution. Failures in distributed systems and requirements for distributed database management systems are addressed.

Chapter 11 Data Fragmentation (page 245) targets ways to split data across a set of servers, which is also known under the terms partitioning or sharding. Several fragmentation strategies for each of the different data models are discussed. Special focus is given to consistent hashing.

Chapter 12 Replication And Synchronization (page 261) elucidates the background on replication for the sake of increased availability and reliability of database systems. Afterwards, replication-related issues like distributed concurrency control and consensus protocols as well as hinted handoff and Merkle trees are discussed.
Chapter 13 Consistency (page 295) touches upon the topic of relaxing the strong consistency requirements known from RDBMSs into weaker forms of consistency.

Part IV Conclusion is the final part of this book.

Chapter 14 Further Database Technologies (page 311) gives a cursory overview of related database topics that are outside the scope of this book. Among other topics, it glimpses at data stream processing, in-memory databases and NewSQL databases.

Chapter 15 Concluding Remarks (page 317) summarizes the main points of this book and discusses approaches for database reengineering and data migration. Lastly, it advocates the idea of polyglot architectures: for each of the different data storage and processing tasks in an enterprise, users are free to choose the database system that is most appropriate for one task while using different database systems for other tasks, and lastly to integrate these systems into a common storage and processing architecture.
Contents

Preface | VII
Overview | IX
List of Figures | XIX
List of Tables | XXII

Part I: Introduction

1 Background | 3
1.1 Database Properties | 3
1.2 Database Components | 5
1.3 Database Design | 7
1.3.1 Entity-Relationship Model | 8
1.3.2 Unified Modeling Language | 11
1.4 Bibliographic Notes | 14

2 Relational Database Management Systems | 17
2.1 Relational Data Model | 17
2.1.1 Database and Relation Schemas | 17
2.1.2 Mapping ER Models to Schemas | 18
2.2 Normalization | 19
2.3 Referential Integrity | 20
2.4 Relational Query Languages | 22
2.5 Concurrency Management | 24
2.5.1 Transactions | 24
2.5.2 Concurrency Control | 26
2.6 Bibliographic Notes | 28

Part II: NOSQL And Non-Relational Databases

3 New Requirements, “Not only SQL” and the Cloud | 33
3.1 Weaknesses of the Relational Data Model | 33
3.1.1 Inadequate Representation of Data | 33
3.1.2 Semantic Overloading | 34
3.1.3 Weak Support for Recursion | 34
3.1.4 Homogeneity | 35
3.2 Weaknesses of RDBMSs | 36
3.3 New Data Management Challenges | 37
3.4 Bibliographic Notes | 39

4 Graph Databases | 41
4.1 Graphs and Graph Structures | 41
4.1.1 A Glimpse on Graph Theory | 42
4.1.2 Graph Traversal and Graph Problems | 44
4.2 Graph Data Structures | 45
4.2.1 Edge List | 46
4.2.2 Adjacency Matrix | 46
4.2.3 Incidence Matrix | 48
4.2.4 Adjacency List | 50
4.2.5 Incidence List | 51
4.3 The Property Graph Model | 53
4.4 Storing Property Graphs in Relational Tables | 56
4.5 Advanced Graph Models | 58
4.6 Implementations and Systems | 62
4.6.1 Apache TinkerPop | 62
4.6.2 Neo4J | 65
4.6.3 HyperGraphDB | 66
4.7 Bibliographic Notes | 68

5 XML Databases | 69
5.1 XML Background | 69
5.1.1 XML Documents | 69
5.1.2 Document Type Definition (DTD) | 71
5.1.3 XML Schema Definition (XSD) | 73
5.1.4 XML Parsers | 75
5.1.5 Tree Model of XML Documents | 76
5.1.6 Numbering Schemes | 78
5.2 XML Query Languages | 81
5.2.1 XPath | 81
5.2.2 XQuery | 82
5.2.3 XSLT | 83
5.3 Storing XML in Relational Databases | 84
5.3.1 SQL/XML | 84
5.3.2 Schema-Based Mapping | 86
5.3.3 Schemaless Mapping | 89
5.4 Native XML Storage | 90
5.4.1 XML Indexes | 90
5.4.2 Storage Management | 92
5.4.3 XML Concurrency Control | 97
5.5 Implementations and Systems | 100
5.5.1 eXistDB | 100
5.5.2 BaseX | 102
5.6 Bibliographic Notes | 104

6 Key-value Stores and Document Databases | 105
6.1 Key-Value Storage | 105
6.1.1 Map-Reduce | 106
6.2 Document Databases | 109
6.2.1 JavaScript Object Notation | 110
6.2.2 JSON Schema | 112
6.2.3 Representational State Transfer | 116
6.3 Implementations and Systems | 118
6.3.1 Apache Hadoop MapReduce | 118
6.3.2 Apache Pig | 121
6.3.3 Apache Hive | 127
6.3.4 Apache Sqoop | 128
6.3.5 Riak | 129
6.3.6 Redis | 132
6.3.7 MongoDB | 133
6.3.8 CouchDB | 136
6.3.9 Couchbase | 139
6.4 Bibliographic Notes | 140

7 Column Stores | 143
7.1 Column-Wise Storage | 143
7.1.1 Column Compression | 144
7.1.2 Null Suppression | 149
7.2 Column Striping | 151
7.3 Implementations and Systems | 158
7.3.1 MonetDB | 158
7.3.2 Apache Parquet | 158
7.4 Bibliographic Notes | 159

8 Extensible Record Stores | 161
8.1 Logical Data Model | 161
8.2 Physical Storage | 166
8.2.1 Memtables and Immutable Sorted Data Files | 166
8.2.2 File Format | 169
8.2.3 Redo Logging | 171
8.2.4 Compaction | 173
8.2.5 Bloom Filters | 175
8.3 Implementations and Systems | 181
8.3.1 Apache Cassandra | 181
8.3.2 Apache HBase | 185
8.3.3 Hypertable | 187
8.3.4 Apache Accumulo | 189
8.4 Bibliographic Notes | 191

9 Object Databases | 193
9.1 Object Orientation | 193
9.1.1 Object Identifiers | 194
9.1.2 Normalization for Objects | 196
9.1.3 Referential Integrity for Objects | 200
9.1.4 Object-Oriented Standards and Persistence Patterns | 200
9.2 Object-Relational Mapping | 202
9.2.1 Mapping Collection Attributes to Relations | 203
9.2.2 Mapping Reference Attributes to Relations | 204
9.2.3 Mapping Class Hierarchies to Relations | 204
9.2.4 Two-Level Storage | 208
9.3 Object Mapping APIs | 209
9.3.1 Java Persistence API (JPA) | 209
9.3.2 Apache Java Data Objects (JDO) | 215
9.4 Object-Relational Databases | 217
9.5 Object Databases | 222
9.5.1 Object Persistence | 223
9.5.2 Single-Level Storage | 224
9.5.3 Reference Management | 226
9.5.4 Pointer Swizzling | 226
9.6 Implementations and Systems | 229
9.6.1 DataNucleus | 229
9.6.2 ZooDB | 230
9.7 Bibliographic Notes | 232

Part III: Distributed Data Management

10 Distributed Database Systems | 235
10.1 Scaling Horizontally | 235
10.2 Distribution Transparency | 236
10.3 Failures in Distributed Systems | 237
10.4 Epidemic Protocols and Gossip Communication | 239
10.4.1 Hash Trees | 241
10.4.2 Death Certificates | 243
10.5 Bibliographic Notes | 244

11 Data Fragmentation | 245
11.1 Properties and Types of Fragmentation | 245
11.2 Fragmentation Approaches | 249
11.2.1 Fragmentation for Relational Tables | 249
11.2.2 XML Fragmentation | 250
11.2.3 Graph Partitioning | 252
11.2.4 Sharding for Key-Based Stores | 253
11.2.5 Object Fragmentation | 254
11.3 Data Allocation | 255
11.3.1 Cost-Based Allocation | 256
11.3.2 Consistent Hashing | 257
11.4 Bibliographic Notes | 259

12 Replication And Synchronization | 261
12.1 Replication Models | 261
12.1.1 Master-Slave Replication | 262
12.1.2 Multi-Master Replication | 263
12.1.3 Replication Factor and the Data Replication Problem | 263
12.1.4 Hinted Handoff and Read Repair | 265
12.2 Distributed Concurrency Control | 266
12.2.1 Two-Phase Commit | 266
12.2.2 Paxos Algorithm | 268
12.2.3 Multiversion Concurrency Control | 276
12.3 Ordering of Events and Vector Clocks | 276
12.3.1 Scalar Clocks | 277
12.3.2 Concurrency and Clock Properties | 280
12.3.3 Vector Clocks | 281
12.3.4 Version Vectors | 284
12.3.5 Optimizations of Vector Clocks | 289
12.4 Bibliographic Notes | 293

13 Consistency | 295
13.1 Strong Consistency | 295
13.1.1 Write and Read Quorums | 298
13.1.2 Snapshot Isolation | 300
13.2 Weak Consistency | 302
13.2.1 Data-Centric Consistency Models | 303
13.2.2 Client-Centric Consistency Models | 305
13.3 Consistency Trade-offs | 306
13.4 Bibliographic Notes | 307

Part IV: Conclusion

14 Further Database Technologies | 311
14.1 Linked Data and RDF Data Management | 311
14.2 Data Stream Management | 312
14.3 Array Databases | 313
14.4 Geographic Information Systems | 314
14.5 In-Memory Databases | 315
14.6 NewSQL Databases | 315
14.7 Bibliographic Notes | 316

15 Concluding Remarks | 317
15.1 Database Reengineering | 317
15.2 Database Requirements | 318
15.3 Polyglot Database Architectures | 320
15.3.1 Polyglot Persistence | 320
15.3.2 Lambda Architecture | 322
15.3.3 Multi-Model Databases | 322
15.4 Implementations and Systems | 324
15.4.1 Apache Drill | 324
15.4.2 Apache Druid | 326
15.4.3 OrientDB | 327
15.4.4 ArangoDB | 330
15.5 Bibliographic Notes | 331

Bibliography | 333
Index | 347
List of Figures

1.1 Database management system and interacting components | 5
1.2 ER diagram | 11
1.3 UML diagram | 15

2.1 An algebra tree (left) and its optimization (right) | 24

3.1 Example for semantic overloading | 34

4.1 A social network as a graph | 41
4.2 Geographical data as a graph | 42
4.3 A property graph for a social network | 55
4.4 Violation of uniqueness of edge labels | 56
4.5 Two undirected hyperedges | 58
4.6 A directed hyperedge | 59
4.7 An oriented hyperedge | 60
4.8 A hypergraph with generalized hyperedge “Citizens” | 60
4.9 A nested graph | 62

5.1 Navigation in an XML tree | 77
5.2 XML tree | 78
5.3 XML tree with preorder numbering | 79
5.4 Pre/post numbering and pre/post plane | 79
5.5 DeweyID numbering | 80
5.6 Chained memory pages | 93
5.7 Chained memory pages with text extraction | 94
5.8 B-tree structure for node IDs in pages | 95
5.9 Page split due to node insertion | 96
5.10 Conflicting accesses in an XML tree | 98
5.11 Locks in an XML tree | 99

6.1 A map-reduce example | 107
6.2 A map-reduce-combine example | 109

7.1 Finite state machine for record assembly | 157

8.1 Writing to memory tables and data files | 167
8.2 Reading from memory tables and data files | 168
8.3 File format of data files | 170
8.4 Multilevel index in data files | 171
8.5 Write-ahead log on disk | 172
8.6 Compaction on disk | 173
8.7 Leveled compaction | 175
8.8 Bloom filter for a data file | 176
8.9 A Bloom filter of length m = 16 with three hash functions | 178
8.10 A partitioned Bloom filter with k = 4 and partition length m′ = 4 | 181

9.1 Generalization (left) versus abstraction (right) | 195
9.2 Unnormalized objects | 197
9.3 First object normal form | 198
9.4 Second object normal form | 198
9.5 Third object normal form | 199
9.6 Fourth object normal form | 200
9.7 Simple class hierarchy | 205
9.8 Resident Object Table (grey: resident, white: non-resident) | 227
9.9 Edge Marking (grey: resident, white: non-resident) | 228
9.10 Node Marking (grey: resident, white: non-resident) | 228

10.1 A hash tree for four messages | 242

11.1 XML fragmentation with shadow nodes | 252
11.2 Graph partitioning with shadow nodes and shadow edges | 253
11.3 Data allocation with consistent hashing | 257
11.4 Server removal with consistent hashing | 258
11.5 Server addition with consistent hashing | 259

12.1 Master-slave replication | 262
12.2 Master-slave replication with multiple records | 263
12.3 Multi-master replication | 263
12.4 Failure and recovery of a server | 264
12.5 Failure and recovery of two servers | 264
12.6 Two-phase commit: commit case | 267
12.7 Two-phase commit: abort case | 268
12.8 A basic Paxos run without failures | 270
12.9 A basic Paxos run with a failing leader | 272
12.10 A basic Paxos run with dueling proposers | 273
12.11 A basic Paxos run with a minority of failing acceptors | 274
12.12 A basic Paxos run with a majority of failing acceptors | 275
12.13 Lamport clock with two processes | 279
12.14 Lamport clock with three processes | 279
12.15 Lamport clock totally ordered by process identifiers | 280
12.16 Lamport clock with independent events | 281
12.17 Vector clock | 283
12.18 Vector clock with independent events | 284
12.19 Version vector synchronization with union merge | 287
12.20 Version vector synchronization with siblings | 288
12.21 Version vector with replica IDs and stale context | 291
12.22 Version vector with replica IDs and concurrent write | 292

13.1 Interfering operations at three replicas | 296
13.2 Serial execution at three replicas | 297
13.3 Read-one write-all quorum (left) and majority quorum (right) | 298

15.1 Polyglot persistence with integration layer | 321
15.2 Lambda architecture | 323
15.3 A multi-model database | 324
List of Tables

2.1 A relational table | 17
2.2 Unnormalized relational table | 20
2.3 Normalized relational table | 21

3.1 Base table for recursive query | 35
3.2 Result table for recursive query | 35

4.1 Node table and attribute table for a node type | 56
4.2 Edge table | 57
4.3 Attribute table for an edge type | 57
4.4 General attribute table | 57

5.1 Schema-based mapping | 88
5.2 Schemaless mapping | 89

7.1 Run-length encoding | 145
7.2 Bit-vector encoding | 145
7.3 Dictionary encoding | 146
7.4 Dictionary encoding for sequences | 146
7.5 Frame of reference encoding | 147
7.6 Frame of reference encoding with exception | 147
7.7 Differential encoding | 148
7.8 Differential encoding with exception | 148
7.9 Position list encoding | 150
7.10 Position bit-string encoding | 150
7.11 Position range encoding | 151
7.12 Column striping example | 157

8.1 Library tables revisited | 161
8.2 False positive probability for m = 4 · n | 180
8.3 False positive probability for m = 8 · n | 180

9.1 Unnormalized representation of collection attributes | 203
9.2 Normalized representation of collection attributes | 204
9.3 Collection attributes as sets | 219

11.1 Vertical fragmentation | 249
11.2 Horizontal fragmentation | 250
Part I: Introduction
1 Background

Database systems are fundamental for the information society. Every day, an inestimable amount of data is produced, collected, stored and processed: online shopping, sending emails, using social media, or seeing your physician are just some of the day-to-day activities that involve data management. A properly working database management system is hence crucial for a smooth operation of these activities. In this chapter, we introduce the principles and properties that a database system should fulfill. Database management systems and their components as well as data modeling are the other two basic concepts treated in this chapter.
1.1 Database Properties

As data storage plays such a crucial role in most applications, database systems should guarantee correct and reliable execution in several use cases. From an abstract perspective, we desire that a database system fulfill the following properties:

Data management. A database system not only stores data, it must also support operations for retrieval of data, searches for data and updates on data. To enable interoperability with external applications, the database system must provide communication interfaces or application programming interfaces for several communication protocols or programming languages. A database system should also support transactions: a transaction is a sequence of operations on data in a database that must not be interrupted. In other words, the database executes operations within a transaction according to the “all or nothing” principle: either all operations succeed to their full extent or none of the operations is executed (and the subsequence of operations that was already executed is undone).

Scalability. The amount of data processed daily with modern information technology is tremendous. Processing these data can only be achieved by distribution of data in a network of database servers and a high level of parallelization. Database systems must flexibly react and adapt to a higher workload.

Heterogeneity. When collecting data or producing data (as the output of some program), these data are usually not tailored to being stored in a relational table format. While data in relational format are called structured and have a fixed schema that prescribes the structure of the data, data often come in different formats. Data that have a more flexible structure than the table format are called semi-structured; these can be tree-like structures (as used in XML documents) or – more generally – graph structures. Furthermore, data can be entirely unstructured (like arbitrary text documents).

Efficiency. The majority of database applications need fast database systems. Online shopping and web searches rely on high-performance search and retrieval operations. Likewise, other database operations like store and update must be executed in a speedy fashion to ensure operability of database applications.

Persistence. The main purpose of a database system is to provide a long-term storage facility for data. Some modern database applications (like data stream processing) require only a kind of selective persistence: only some designated output data have to be stored on long-term storage devices, whereas the majority of the data is processed in volatile main memory and discarded afterwards.

Reliability. Database systems must prevent data loss. Data stored in the database system should not be distorted unintentionally: data integrity must be maintained by the database system. Storing copies of data on other servers or storage media (a mechanism called physical redundancy or replication) is crucial for data recovery after a failure of a database server.

Consistency. The database system must do its best to ensure that no incorrect or contradictory data persist in the system. This involves the automatic verification of consistency constraints (data dependencies like primary key or foreign key constraints) and the automatic update of distributed data copies (the replicas).

Non-redundancy. While physical redundancy is decisive for the reliability of a database system, duplication of values inside the stored data sets (that is, logical redundancy) should best be avoided. First of all, logical redundancy wastes space on the storage media. Moreover, data sets with logical redundancy are prone to different forms of anomalies that can lead to erroneous or inconsistent data. Normalization is one way to transform data sets into a non-redundant format.

Multi-User Support. Modern database systems must support concurrent accesses by multiple users or applications. Those independent accesses should run in isolation and not interfere with each other, so that a user does not notice that other users are accessing the database system at the same time. Another major issue with multi-user support is the need for access control: data of one user should be protected from unwanted accesses by other users. A simple strategy for access control is to only allow users access to certain views on the data sets. A well-defined authentication mechanism is crucial to implement access control.
A database system should manage large amounts of heterogeneous data in an efficient, persistent, reliable, consistent, non-redundant way for multiple users.
Database systems often do not satisfy all of these requirements, or satisfy them only to a certain extent. When choosing a database system for a specific application, clarifying all mandatory requirements and weighing the pros and cons of the available systems is the first and foremost task.
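The all-or-nothing principle of transactions can be made concrete with a small SQL sketch. The table and column names below are hypothetical and serve only as an illustration; the point is that either both updates become visible or neither does:

    -- Transfer 100 units between two accounts atomically.
    START TRANSACTION;
    UPDATE Account SET Balance = Balance - 100 WHERE AccountID = 1;
    UPDATE Account SET Balance = Balance + 100 WHERE AccountID = 2;
    COMMIT;
    -- If an error occurs before COMMIT, a ROLLBACK undoes the
    -- subsequence of operations that was already executed.

If the server fails between the two updates, the recovery mechanisms of the DBMS ensure that the first update is undone as well, so the database never exposes a state in which the amount has left one account but not arrived at the other.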
[Figure: external applications connect via the network interface to the database server; on the server, the database management system interacts with the operating system, the file system (stored data) and the main memory.]
Fig. 1.1. Database management system and interacting components
1.2 Database Components

The software component that is in charge of all database operations is the database management system (DBMS). Several other systems and components interact with the DBMS as shown in Figure 1.1. The DBMS relies on the operating system and the file system of the database server to store the data on disk. The DBMS also relies on the operating system to be able to use the network interfaces for communication with external applications or other database servers. The low-level file system (or the operating system) has no knowledge of the internal structure or meaning of the stored data; it just handles the stored data as arbitrary records. Hence, the purpose of the database management system is to provide the users with a higher-level interface and more structured data storage and retrieval operations.

The DBMS operates on data in the main memory; more precisely, it handles data in a particular portion of the main memory (called the page buffer) that is reserved for the DBMS. The typical storage unit on disk is a “block” of data; often this data block is called a memory page. The basic procedure of loading stored data from disk into main memory consists of the following steps:
1. the DBMS receives a query or command accessing some page not contained in the database buffer (a page fault occurs);
2. the DBMS locates a page on disk containing some of the relevant data (possibly using indexes or “scanning” the table);
3. the DBMS copies this page into its page buffer;
4. as the page usually contains more data than needed by the query or command, the DBMS locates the relevant values (for example, certain attributes of a tuple) inside the page and processes them;
5. if data are modified by a command, the DBMS modifies the values inside the page accordingly;
6. the DBMS eventually writes pages containing modified values back from the page buffer onto disk.
Due to the different organization and size of main memory and disk storage, data management has to handle different kinds of addresses at which a page can be located. On disk, each page has a physical disk address that consists of the disk’s name, the cylinder number, the track number and the number of the page inside the track. Records inside the page can be addressed by an additional offset. Once a page is loaded into main memory, it receives a physical main memory address. The main memory might however be too small to hold all pages needed by an application. Virtual addresses (also called logical addresses) can be used to make accesses from inside an application independent from the actual organization of pages in memory or on disk. This indirection must be handled with a look-up table that translates a virtual address of a page into its current physical address (on disk or in main memory). Moreover, records in pages can contain references (that is, pointers that contain an address) to records in the same page or in other pages. When using physical addresses for the pointers, pointer swizzling is the process of converting the disk address of a pointer into a main memory address when the referenced page is loaded into main memory. Hence, main memory management is the interface between the underlying file system and the database management system.

While data storage management is the main task of a DBMS, data management as a whole involves many more complex processes. The DBMS itself consists of several subcomponents to execute these processes; the specific implementation of these components may vary from database system to database system. Some important components are the following:

Authentication Manager. Users have to provide an identification and a credential (like a user name and a password) when establishing a connection to the database.

Query Parser. The query parser reads the user-supplied query string. It checks whether the query string has a valid syntax. If so, the parser breaks the query up into several commands that are needed internally to answer the query.

Authorization Controller. Based on the authenticated user identity and the access privileges granted to the users by the database administrator, the authorization controller checks whether the accessing user has sufficient privileges to execute the query.

Command Processor. All the subcommands (into which a user’s query is broken) are executed by the command processor.

File Manager. The file manager is aware of all the resources (in particular, disk space) that the database management system may use. With the help of the file manager, the required data parts (the memory pages containing relevant data) are
located inside the database files stored on disk. When storing modified data back to disk from the main memory, the file manager finds the correct disk location for writing the data; the basic unit for memory-to-disk transfer is again a memory page.

Buffer Manager. The buffer manager is in charge of loading the data into the main memory and handling the data inside the main memory buffer. From the memory pages inside the buffer it retrieves those values needed to execute the database operations. The buffer manager also initiates the writing of modified memory pages back to disk.

Transaction Manager. Multiple concurrent transactions must be executed by the database system in parallel. The transaction manager takes care of the correct execution of concurrent transactions. When transactions can acquire locks on data (for exclusive access to the data), the transaction manager handles locking and unlocking of data. The transaction manager also ensures that all transactions are either committed (successfully completed) or rolled back (operations of a transaction executed so far are undone).

Scheduler. The scheduler orders the read and write operations of several concurrent transactions in such a way that the operations from different transactions are interleaved. One criterion for a good scheduler is serializability of the obtained ordering of operations; that is, a schedule that is equivalent (regarding the values that are read and written by each transaction) to a non-interleaved, serial execution of the transactions. Variants of schedulers are locking schedulers (that include lock and unlock operations on the data that are read and written) and non-locking schedulers (that usually order operations depending on the start time of transactions).

Recovery Manager. To prepare for the case of server failures, the recovery manager can set up periodical backup copies of the database. It may also use transaction logs to restart the database server into a consistent state after a failure.
1.3 Database Design

Database design is a phase before using a database system – even before deciding which system to use. The design phase should clearly answer basic questions like: Which data are relevant for the customers or the external applications? How should these relevant data be stored in the database? Which are the usual access patterns on the stored data? For conventional database systems (with a more or less fixed data schema), changing the schema on a running database system is complex and costly; that is why a good database design is essential for these systems. Nevertheless, for database systems with more flexible schemas (or no schema at all), the design phase is important, too: identifying relationships in the data, grouping data that are often accessed together, or choosing good values for row keys or column names are all beneficial for a good performance of the database system. Hence, database design should be done with due care and following design criteria like the following.

Completeness. All aspects of the information needed by the accessing applications should be covered.

Soundness. All information aspects and relationships between different aspects should be modeled correctly.

Minimality. No unnecessary or logically redundant information should be stored; in some situations, however, it might be beneficial to allow some form of logical redundancy to obtain a better performance.

Readability. No complex encoding should be used to describe the information; instead, the chosen identifiers (like row keys or column names) should be self-explanatory.

Modifiability. Changes in the structure of the stored data are likely to occur when running a database system over a long time. While for schema-free database systems these changes have to be handled by the accessing applications, for database systems with a fixed schema a “schema evolution” strategy has to be supported.

Modularity. The entire data set should be divided into subsets that form logically coherent entities in order to simplify data management. A modular design is also advantageous for easy changes of the schema.

There are several graphical languages for database design. We briefly review the Entity-Relationship Model (ERM) and the Unified Modeling Language (UML). We introduce the notation by using the example of a library: readers can borrow books from the library. Other modeling strategies may also be used. For example, XML documents can be pictured as trees; graph structures for graph databases can be depicted by nodes and edges, each annotated with a set of properties. These modeling strategies will be deferred to later sections of the book when the respective data models (XML or graph data) are introduced.
1.3.1 Entity-Relationship Model

Entity-Relationship (ER) diagrams have a long history for conceptual modeling – that is, grouping data into concepts and describing their semantics. In particular, ER models have been used in database design to specify which real-world concepts will be represented in the database system, what properties of these concepts will be stored and how different concepts relate to each other. We will introduce ER modeling with the example of a library information system. The basic modeling elements of ER diagrams are:

Entities. Entities represent things or beings. They can range from physical objects over non-physical concepts to roles of persons. They are drawn as rectangles with their entity names written into the rectangle.
For our library example, we first of all need the two entities Reader and Book:

[Diagram: two entity rectangles labeled Reader and Book]
Relationships. Relationships describe associations between entities. Relationships are diamond-shaped with links to the entities participating in the relationship. In our example, BookLending is a relationship between readers and books:

[Diagram: relationship diamond BookLending linking the entities Reader and Book]
Attributes. Attributes describe properties of entities and relationships; they carry the information that is relevant for each entity or relationship. Attributes have an oval shape and are connected to the entity or relationship they belong to. A distinction is made between single-valued, multi-valued and composite attributes. Simple single-valued attributes can have a single value for the property; for example, the title of a book:

[Diagram: entity Book with attribute oval Title]
Multi-valued attributes can have a set of values for the property; for example, the set of authors of a book:

[Diagram: entity Book with multi-valued attribute Author]
Composite attributes are attributes that consist of several subattributes; for example, the publisher information of a book consists of the name of the publisher and the city where the publisher’s office is located:

[Diagram: entity Book with composite attribute Publisher and its subattributes Name and City]
Moreover, key attributes are those attributes whose values serve as unique identifiers for the corresponding entity. Key attributes are indicated by underlining them. For example, the identifier for each copy of a book is a unique value issued by the library (like the library signature of the book and a counter for the different copies of a book):

[Diagram: entity Book with underlined key attribute BookID]
Cardinalities. Relationships can come in different complexities. In the simplest case, these relationships are binary (that is, a relationship between two entities). Binary relationships can be distinguished into 1:1, 1:n and n:m relationships:
– a 1:1 relationship links an instance of one entity to exactly one instance of the other entity; an example is a marriage between two persons
– a 1:n relationship links an instance of one entity to multiple instances of the other entity; for example, a book copy can only be lent to a single reader at a time, but a reader can borrow multiple books at the same time
– an n:m relationship is an arbitrary relationship without any restriction on the cardinalities
Such cardinalities are annotated on the relationship links in the ER diagram. In our example, we have the case of a 1:n relationship between books and readers:

[Diagram: relationship BookLending with cardinality 1 at the Reader end and n at the Book end]
The Enhanced Entity-Relationship Modeling (EERM) language offers some advanced modeling elements. Most prominently, the “is-a” relationship is included in EERM to express specializations of an entity. The “is-a” relationship is depicted by a triangle pointing from the more specialized to the more general entity. For example, a novel can be a specialization of a book:

[Diagram: entity Novel connected by an “is-a” triangle to entity Book]
[Figure: the complete ER diagram of the library – entity Reader with key attribute ReaderID and attributes Name and Email; entity Book with key attribute BookID, attributes Title and Year, multi-valued attribute Author and composite attribute Publisher (Name, City); Reader and Book are connected by the 1:n relationship BookLending with attribute ReturnDate.]
Fig. 1.2. ER diagram
Attributes of the more general entity will also be attributes of the more specialized entity; in other words, attributes are inherited by the specialized entities. The overall picture of our library example is shown in Figure 1.2. The entity Reader is identified by the key attribute ReaderID (a unique value issued by the library) and has a name and an email address as additional attributes. The entity Book is identified by the BookID, has its title and its year of publication as single-valued attributes, its list of authors as a multi-valued attribute and the publisher information as a composite attribute. Books and readers are linked by a 1:n relationship which has the return date for the book as an additional attribute.
1.3.2 Unified Modeling Language

The Unified Modeling Language (UML) is a widely adopted modeling language – in particular in the object-oriented domain – and it is a standard of the Object Management Group (OMG; see Section 9.1.4). As such it can not only model entities (also known as classes) and their relationships (also known as associations) but also other object-oriented concepts like methods, objects, activities and interactions.

Web resources:
– UML resource page: http://www.uml.org/
– specification: http://www.omg.org/spec/UML/
The UML standard consists of several diagram types that can each illustrate a different aspect of the modeled application. These diagrams can specify the application structure (like class diagrams, object diagrams or component diagrams) or the application behavior (like activity diagrams, use case diagrams or sequence diagrams). These diagrams can be used to model an application at different abstraction levels throughout the entire design and implementation process. From the database point of view, we will confine ourselves to the class diagram, which is used to express the general structure of the stored data and is hence closely related to the Entity-Relationship diagram. We briefly review the most important notation elements.

Classes, attributes and methods. Classes describe concepts or things and are hence equivalent to entities of ER modeling. A class is drawn as a rectangle that is split into three parts. The upper part contains the class name, the middle part contains the attributes (describing state), and the lower part contains method declarations (describing behavior). The Reader class might for example contain methods to borrow and return a book (describing the behavior that a reader can have in the library) in addition to the attributes ID, name and email address (describing the state of each reader object by the values that are stored in the attributes):

  Reader
    readerID
    name
    email
    borrowBook()
    returnBook()

Types and visibility. As UML is geared towards object-oriented software design, attributes, parameters and return values can also be accompanied by a type declaration. For example, while the readerID would be an integer, the other attributes would be strings; the methods have the appropriate parameters of type Book (a user-defined type) and the return value void (as we don’t expect any value to be returned by the methods). Attributes and methods can also have a visibility denoting whether they can be accessed from other classes or only from within the same class. While + stands for public access without any restriction, # stands for protected access only from the same class or its subclasses, ~ stands for access from classes within the same package, and - stands for private access only from within the same class.

  Reader
    - readerID: int
    - name: String
    - email: String
    ~ borrowBook(b: Book): void
    ~ returnBook(b: Book): void
To model multi-valued attributes, a collection type (like array or list) can be used. For example, we can model the authors as a list of strings:

  Book
    bookID: int
    title: String
    year: int
    authors: List<String>

Associations. Associations between classes are equivalent to relationships between entities. In the simplest case of a binary association (that is, an association between two classes), the association is drawn as a straight line between the classes. To model composite attributes, an association to a new class for the composite attribute containing the subattributes is used:

[Diagram: class Book (bookID, title, year, authors) associated with class Publisher (name, city)]

In more complex cases – for instance, when the association should have additional attributes, or when an association links more than two classes – an association class must be attached to the association. In the library example, we need an explicit association class to model the return date:

[Diagram: Reader associated with Book; the association class BookLending with attribute returnDate is attached to the association line]

Advanced cases like directed associations, aggregation or composition may also be used to express different semantics of an association. These kinds of associations have their own notational elements.

Multiplicities. Similar to the cardinalities in ERM, we can specify the complexities of an association. These multiplicities are annotated on the endpoints of the association. In general, arbitrary sequences or ranges of integers are allowed; a special symbol is the asterisk *, which stands for an arbitrary number. Again, we model the association between readers and books in such a way that a book can only be lent to a single reader at a time,
but a reader can borrow multiple books at the same time:

[Diagram: association between Reader and Book with multiplicity 1 at the Reader end and * at the Book end]
Specialization. A specialization in UML is depicted by a triangular arrow tip pointing from the subclass to the superclass. A subclass inherits all attributes and all method definitions from the superclass; however, a subclass is free to override the inherited methods.

[Diagram: class Novel connected by a triangular arrow tip to class Book (bookID, title, year, authors)]
Interfaces and implementation. Interfaces prescribe attributes and methods for the classes implementing them; methods can however only be declared in the interface but must be defined in the implementing classes. Interfaces have their name written in italics (and optionally have the stereotype «interface» written above the interface name). The implementing classes are connected to the interface by a dashed line with a triangular arrow tip. For example, the Reader class may implement a Person interface with a name attribute:

[Diagram: «interface» Person with attribute name; class Reader (readerID, email; borrowBook(), returnBook()) implements it, connected by a dashed line with a triangular arrow tip]
The overall UML class diagram in Figure 1.3 is equivalent to the previous ER diagram for our library example. UML is particularly important for the design of object databases (that directly store objects out of an object-oriented program). But due to the widespread use of UML in software engineering, it also suggests itself as a general-purpose database design language.
[Figure: the UML class diagram of the library – class Reader (readerID, name, email; borrowBook(), returnBook()), class Book (bookID, title, year, authors) and class Publisher (name, city); Reader and Book are associated with multiplicities 1 and *, with the association class BookLending (returnDate) attached; Book and Publisher are associated with multiplicities * and 1.]
Fig. 1.3. UML diagram

1.4 Bibliographic Notes

A wealth of text books is available on the principles of database management systems and data modeling. Profound text books with a focus on relational database management
systems include the books by Jukic [Juk13], Connolly and Begg [CB09] and Garcia-Molina, Ullman and Widom [GMUW08]. ER diagrams have a long history for the design of relational databases, and the ER model was unified by Chen in his influential article [Che76]. With a focus on the theory of information system design, Olivé [Oli07] provides a row of UML examples, whereas Halpin and Morgan [HM10] cover conceptual modeling for relational databases with both ER and UML diagrams. For a profound background on UML, refer to the text books by Booch, Rumbaugh and Jacobson [BRJ05] and Larman [Lar05]. Last but not least, a general introduction to requirements engineering can be found in the text book by van Lamsweerde [vL09].
2 Relational Database Management Systems

The relational data model is based on the concept of storing records of data as rows inside tables. Each row represents an entity of the real world, with the table columns being attributes (or properties) of interest of these entities. The relational data model has been the predominant data model of database systems for several decades. Relational Database Management Systems (RDBMSs) have been a commercial success since the 1980s. There are powerful systems on the market with lots of functionalities. These systems also fulfill all the basic requirements for database systems as introduced in Section 1.1. In the following sections, we briefly review the main concepts and terminology of the relational data model.
2.1 Relational Data Model

The relational data model is based on some theoretical notions which will briefly be introduced in the following section. Afterwards we present a way to map an ER model to a database schema.
2.1.1 Database and Relation Schemas

A relational database consists of a set of tables. Each table has a predefined name (the relation symbol) and a set of predefined column names (the attribute names). Each attribute Ai ranges over a predefined domain dom(Ai) such that the values in the column (of attribute Ai) can only come from this domain. A table is then filled row-wise with values that represent the state of an entity; that is, the rows are tuples of values that adhere to the predefined attribute domains, as shown in Table 2.1. Each table hence corresponds to the mathematical notion of a relation in the sense that the set of tuples in a relation is a subset of the cartesian product of the attribute domains: if r is the set of tuples in a table, then r ⊆ dom(A1) × ... × dom(An). The definition of the attribute names Ai for the relation symbol R is called a relation schema; the set of the relation schemas of all relation symbols in the database is then called a database schema. That is, with the database schema we define which
Table 2.1. A relational table

Relation Symbol R | Attribute A1 | Attribute A2 | Attribute A3
Tuple t1 →        | value        | value        | value
Tuple t2 →        | value        | value        | value
tables will be created in the database; and with each relation schema we define which attributes are stored in each table.

In addition to the mere attribute definitions, each relation schema can have intrarelational constraints, and the database schema can have interrelational constraints. These constraints describe which dependencies exist between the stored data; intrarelational constraints describe dependencies inside a single table, whereas interrelational constraints describe dependencies between different tables. Database constraints can be used to verify whether the data inserted into the table are semantically correct. Intrarelational constraints can for example be functional dependencies – and in particular key constraints: the key attributes BookID and ReaderID in our ER diagram will be keys in the corresponding database tables and hence serve as unique identifiers for books and readers. Interrelational constraints can for example be inclusion dependencies – and in particular foreign key constraints: when using the ID of a book in another table (for example, a table for all the book lendings), we must make sure that the ID is included in the Book table; in other words, readers can only borrow books that are already registered in the Book table.

Written more formally, we define a table by assigning to its relation symbol Ri the set of its attributes Aij and the set Σi of its intrarelational constraints.

Formal specification of a relation schema with intrarelational constraints:
Ri = ({Ai1 ... Aim}, Σi)
A database schema then consists of a database name D, a set of relation schemas Ri and a set Σ of interrelational constraints.

Formal specification of a database schema with interrelational constraints:
D = ({R1 ... Rn}, Σ)
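In SQL, a relation schema together with its attribute domains and an intrarelational key constraint is declared with a CREATE TABLE statement. The following sketch declares the generic table R from Table 2.1; the chosen domains are assumptions for illustration only:

    CREATE TABLE R (
      A1 INTEGER,        -- dom(A1): integer values
      A2 VARCHAR(100),   -- dom(A2): strings of at most 100 characters
      A3 DATE,           -- dom(A3): calendar dates
      PRIMARY KEY (A1)   -- an intrarelational key constraint
    );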
2.1.2 Mapping ER Models to Schemas

With some simple steps, an ER diagram can be translated into a database schema:

Each entity name corresponds to a relation symbol. In our example, the entity Book is mapped to the relation symbol Book.

Entity attributes correspond to relation attributes. In our example, the entity attributes BookID and Title will also be attributes in the relation Book; hence they will be in the relation schema of the relation Book. However, the relational data model does not allow multi-valued and composite attributes. In the case of multi-valued attributes, a new relation schema is created for each multi-valued attribute, containing additional foreign keys (to connect the new relation schema to the original relation schema). In our example, the multi-valued attribute Author must be translated into a new relation BookAuthors with attributes BookID and Author and a foreign key constraint BookAuthors.BookID ⊆ Book.BookID.
Composite attributes (like Publisher) should usually be treated as single-valued attributes. We have two options for doing this:
– by combining their subattributes into one value;
– or by only storing the subattributes (like City and Name) and disregarding the composite attribute (like Publisher) altogether.

Relationships are also translated into a relation schema; for example, we have a BookLending relation in our database with the attribute ReturnDate. In order to be able to map the values from the entities connected by the relationship together, the relation also contains the key attributes of the entities participating in the relationship. That is why the BookLending relation also has a BookID attribute and a ReaderID attribute with foreign key constraints on them. Note that this is the most general case of mapping an arbitrary relationship; in simpler cases (like a 1:1 relationship) we might also simply add the primary key of one entity as a foreign key to the other entity.

What we see in the end is that we can indeed easily map the conceptual model (the ER diagram) for our library example into a relational database schema. The definitions of the database schema and relation schemas are as follows:
Database schema:

Library = ({Book, BookAuthors, Reader, BookLending},
    {BookAuthors.BookID ⊆ Book.BookID,
     BookLending.BookID ⊆ Book.BookID,
     BookLending.ReaderID ⊆ Reader.ReaderID})

Relation schemas:

Book = ({BookID, Title, Year}, {BookID → Title, Year})
BookAuthors = ({BookID, Author}, {})
Reader = ({ReaderID, Name, Email}, {ReaderID → Name, Email})
BookLending = ({BookID, ReaderID, ReturnDate}, {BookID, ReaderID → ReturnDate})
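Continuing the SQL sketch from above, the remaining two relation schemas could be created as follows; again the data types are assumptions, and the primary keys mirror the functional dependencies stated above.

CREATE TABLE Reader (
    ReaderID INT PRIMARY KEY,   -- ReaderID → Name, Email
    Name  VARCHAR(200),
    Email VARCHAR(200)
);

CREATE TABLE BookLending (
    BookID     INT,
    ReaderID   INT,
    ReturnDate DATE,
    PRIMARY KEY (BookID, ReaderID),   -- BookID, ReaderID → ReturnDate
    FOREIGN KEY (BookID)   REFERENCES Book (BookID),
    FOREIGN KEY (ReaderID) REFERENCES Reader (ReaderID)
);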
2.2 Normalization

Some database designs are problematic – for example, if tables contain too many attributes, combine the “wrong” attributes, or store data duplicates (that is, if there is logical redundancy). Such problematic database designs entail problems when inserting, deleting or updating values; these problems are known as anomalies. Different types of anomalies exist:
– Insertion anomaly: we need all attribute values before inserting a tuple (but some may still be unknown).
– Deletion anomaly: when deleting a tuple, information is lost that we still need in the database.
– Update anomaly: when data are stored redundantly, values have to be changed in more than one tuple (or even in more than one table).

Table 2.2. Unnormalized relational table

Library
BookID   Title                   ReaderID   Name    ReturnDate
1002     Introduction to DBS     205        Peter   25-10-2016
1004     Algorithms              207        Laura   31-10-2016
1006     Operating Systems       205        Peter   27-10-2016
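As a small illustration of the update anomaly, consider the unnormalized table of Table 2.2 (assuming it exists as a SQL table named Library – an assumption for this sketch): renaming a reader must change every tuple in which that reader occurs.

-- the WHERE clause must catch all rows of reader 205; an update
-- restricted to a single book (e.g. WHERE BookID = 1002) would
-- leave the two stored copies of the name inconsistent
UPDATE Library
SET Name = 'Pete'
WHERE ReaderID = 205;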
Normalization results in a good distribution of the attributes among the tables and hence helps reduce anomalies.
The normalization steps depend on database constraints (in particular, functional dependencies) in the data tables. For example, to obtain the so-called third normal form (3NF) we have to remove all transitive functional dependencies from the tables. We don’t go into detail here but discuss normalization only with our library example. Assume that the information on books, readers, and book lendings were not kept in separate tables, but all the data were stored together in one big table (for the sake of simplicity, we leave out the author information altogether). For two readers and three books we would have a single table as shown in Table 2.2. What we see is that the more books a reader has currently borrowed, the more often their name appears in the table; and if we only want to change the information belonging to a certain book, we would still have to read the whole row, which also contains information on the reader. Due to these considerations, it is commonly agreed that it is advantageous to store data in different tables and link them with foreign key constraints (according to the schema developed in Section 2.1). A normalized version of the Library table (in 3NF) hence looks as shown in Table 2.3.
Table 2.3. Normalized relational table

Book
BookID   Title
1002     Introduction to DBS
1004     Algorithms
1006     Operating Systems

BookLending
BookID   ReaderID   ReturnDate
1002     205        25-10-2016
1006     205        27-10-2016
1004     207        31-10-2016

Reader
ReaderID   Name
205        Peter
207        Laura

2.3 Referential Integrity

We have seen above that foreign key constraints are a special case of interrelational constraints. Referential integrity means that values of the attributes that belong to the foreign key indeed exist as values of the primary key in the referenced table – if there is more than one option to choose a key, it suffices that the referenced attributes are a
candidate key. That is, in the referenced table there must be some tuple to which the foreign key value refers. In our example we stated the requirement that the BookID and the ReaderID in the BookLending table indeed exist in the Book and Reader table, respectively. We can optionally allow the value of the foreign key to be NULL (that is, NULL in all the attributes that the foreign key is composed of). Referential integrity must be ensured when inserting or updating tuples in the referencing table; but deleting tuples from the referenced table as well as updating the primary key (or candidate key) in the referenced table also affects referential integrity. We will discuss these cases with our library example (a SQL sketch follows the list):
– Insert tuple into referencing table: whenever we insert a tuple into a table that has foreign key attributes, we must make sure that the values inserted into the foreign key attributes are equal to values contained in the referenced primary key or candidate key.
– Update tuple in referencing table: the same applies when values of foreign keys are updated.
– Update referenced key: whenever the referenced primary key (or candidate key) is modified, all referencing foreign keys must also be updated. When the referencing foreign keys are themselves referenced by some other tuple, that referencing tuple must also be updated; this is called a cascading update.
– Delete tuple from referenced table: deleting a tuple can violate referential integrity whenever there are other tuples whose foreign keys reference the primary (or candidate) key of the deleted tuple. We could then either disallow the deletion of a referenced tuple or impose a cascading deletion which also deletes all referencing tuples. Alternatively, foreign keys can be set to a default value (if one is defined) or to NULL (if this is allowed).
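In SQL, these reactions can be declared as referential actions on a foreign key. The sketch below is a variant of the earlier BookLending declaration with explicit actions; which actions to choose is a design decision, so the particular choices here are merely assumptions for illustration.

CREATE TABLE BookLending (
    BookID     INT,
    ReaderID   INT,
    ReturnDate DATE,
    PRIMARY KEY (BookID, ReaderID),
    FOREIGN KEY (BookID) REFERENCES Book (BookID)
        ON UPDATE CASCADE    -- cascading update of the referenced key
        ON DELETE CASCADE,   -- cascading deletion of all its lendings
    FOREIGN KEY (ReaderID) REFERENCES Reader (ReaderID)
        ON DELETE NO ACTION  -- disallow deleting a referenced reader
);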
2.4 Relational Query Languages

Having designed the relational database well, how can data actually be inserted into the database; and after that, how can information be retrieved from it? For data retrieval we might want to specify conditions to select relevant tuples, combine values from different tables, or restrict tables to a subset of attributes. The Structured Query Language (SQL) is the standardized language to communicate with RDBMSs; it covers data definition, data manipulation and data querying. For example, you can create a database schema, create a table, insert data into a table, delete data from a table, and query data with the well-known declarative syntax. Some commonly used SQL statements are the following:
– CREATE SCHEMA ...
– CREATE TABLE ...
– INSERT INTO ... VALUES ...
– DELETE FROM ... WHERE ...
– UPDATE ... SET ... WHERE ...
– SELECT ... FROM ... WHERE ...
– SELECT ... FROM ... GROUP BY ...
– SELECT ... FROM ... ORDER BY ...
– SELECT COUNT(*) FROM ...

Other (more mathematical) ways to express queries on relational tables would be the logic-based relational calculus or the operator-based relational algebra. Typical relational algebra operators and examples for these are:
– Projection $\pi$ (restricting a table to some of its attributes). For example, the IDs of readers currently having borrowed a book: $\pi_{ReaderID}(BookLending)$
– Selection $\sigma$ (with a condition on the answer tuples). For example, all book lendings to be returned before 29-10-2016: $\sigma_{ReturnDate < \text{29-10-2016}}(BookLending)$
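The two algebra examples correspond to the following SQL queries on our library schema (a straightforward sketch; the date literal follows standard SQL syntax):

-- projection: IDs of readers currently having borrowed a book;
-- DISTINCT mirrors the set semantics of the algebra operator
SELECT DISTINCT ReaderID
FROM BookLending;

-- selection: all lendings to be returned before 29-10-2016
SELECT *
FROM BookLending
WHERE ReturnDate < DATE '2016-10-29';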