Introductory Data Structures and Algorithms

A course on classic "imperative" data structures and algorithms *in OCaml* at Yale-NUS College in 2019-2022.


Table of contents:
YSC2229: Introductory Data Structures and Algorithms
Table of Contents
Introductory Data Structures and Algorithms
Course Syllabus
Software Prerequisites
Installing OCaml Libraries and Setting up Editors
Microsoft Windows 10
Linux
macOS
FAQ & Troubleshooting
Installing and using Git
Checking your setup
Getting a project from Git
Compiling and running a project
Running utop REPL
OCaml Style Guide
File Submission Requirements
Commenting
Naming and Declarations
Indenting
Using Parentheses
Pattern Matching
Code Factoring
Verbosity
Lecture Notes
Week 01: Introduction
Introduction
About this course
What problems are solved by algorithms?
Data structures
What is analysis of algorithms?
Testing OCaml Code
Correctness of Recursive Algorithms
Warm-up: finding a minimum in a list of integers
Reasoning about termination
Reasoning about correctness
From Recursion to Imperative Loops
Loop variants
Loop invariants
Sorting Lists via Insertion Sort
Insertion sort implementation
Correctness of sorting
Sorting invariants
Exercises
Exercise 1
Exercise 2
Exercise 3
Exercise 4
Exercise 5
Exercise 6
Exercise 7
Exercise 8
Week 02: Working with Arrays
Arrays and Operations on Them
Insertion Sort on Arrays
Tracing Insertion Sort
Insertion Sort Invariants
Termination of Insertion Sort
Selection Sort
Tracing Selection Sort
Invariants of Selection Sort
Termination of Selection Sort
Exercises
Exercise 1
Exercise 2
Exercise 3
Exercise 4
Exercise 5
Week 03: Complexity of Algorithms and Order Notation
Generating Arrays
Simple random generators
Measuring execution time
Randomised array generation and testing
Complexity of Algorithms
Order Notation
Big O-notation
Properties of Big O-notation
Little o-notation
Proofs using O-notation
Hierarchy of algorithm complexities
Complexity of sequential composition
Sums of Series and Complexities of Loops
Arithmetic series
Geometric series
Estimating a sum by an integral
Big O and function composition
Complexity of algorithms with loops
Complexity of Simple Recursive Algorithms
Complexity of computing the factorial
Method of differences
Recurrence relations
First-order recurrence relations
Inhomogeneous recurrence relations
Exercises
Exercise 1: Realistic Complexity of Laplace Expansion
Exercise 2
Exercise 3
Week 04: Divide-and-Conquer Algorithms
Searching in Arrays
Linear Search
Binary Search
Binary Search Invariant
The Main Idea of Divide-and-Conquer algorithms
Merge Sort
Merging two sorted arrays
Main sorting procedure and its invariants
Quicksort and its Variations
Partitioning an array
Partitioning in action
Sorting via partitioning
Complexity of Divide-and-Conquer Algorithms
Changing variable in recurrence relations
Complexity of Merge Sort
Complexity of Quicksort
The Master Theorem
Generalising Comparison-Based Sorting
Comparator as a parameter
A functor for sorting
Exercises
Exercise 1
Exercise 2
Exercise 3
Exercise 4
Exercise 5
Exercise 6
Exercise 7
Exercise 8
Exercise 9
Exercise 10
Week 05: Binary Heaps and Priority Queues
Printing and Validating Generic Arrays
Best-Worst Case for Comparison-Based Sorting
Sorting in Linear Time
Simple Bucket Sort
Enhanced Bucket Sort
Stability of sorting
Radix Sort
Binary Heaps
Finding a maximum in a changing array
Definition of a binary heap
Checking that an array is a heap
Maintaining Binary Heaps
“Heapifying” elements of an array
Complexity of heapify
Building a heap from an array
Heapsort
Heapsort Complexity
Evaluating Heapsort
Which sorting algorithm to choose?
Priority Queues
Creating Priority Queues
Operations on Priority Queues
Working with Priority Queues
Exercises
Exercise 1
Exercise 2
Exercise 3
Exercise 4
Exercise 5
Exercise 6
Week 06: Abstract Data Types
Equivalence Classes and Union-Find
Union-Find Structure
Working with Sets via Union-Find
Testing Union-Find
Information Hiding and Abstraction
Stacks
The Stack interface
A List-Based Stack
An Array-Based Stack
Queues
The Queue interface
An Array-Based Queue
Debugging queue implementations
Doubly Linked Lists
A queue based on doubly linked lists
Exercises
Exercise 1
Exercise 2
Midterm Project: Memory Allocation and Reclamation
Coding Assignment
Report
Week 07: Hashing-Based Data Structures
Hash-tables
Allocation by hashing keys
Operations on hash-tables
Implementing hash-tables
Hash-tables in action
Generalised Hash-Tables
OCaml’s universal hashing
Redefining hash-table signature
A framework for testing hash-tables
A simple list-based hash-table
Testing a Simple Hash-Table
A Resizable hash-table
Comparing performance of different implementations
Bloom Filters and Their Applications
High-level intuition
Bloom filter signature
Implementing a Bloom filter
Experimenting with Bloom filters
Testing Bloom Filters
Improving Simple Hash-table with a Bloom filter
Comparing performance
Week 08: Searching in Strings
Substring Search
Testing a search procedure
A very naive search
A slightly better naive search
A recursive version of the naive search
Testing naive search
Rabin-Karp Search
Recursive version of Rabin-Karp search
Comparing performance of search procedures
Knuth–Morris–Pratt Algorithm
Revisiting the naive algorithm
Returning the Interrupt Index
Relating Matched Text and the pattern
Fast-Forwarding Search using Interrupt Index
Extracting the Interrupt Index
Exploiting the Prefix Equality
Tabulating the interrupt indices
Boot-strapping the table
Comparing performance, again
Exercises
Exercise 1
Exercise 2
Exercise 3
Week 09: Backtracking and Dynamic Programming
Constraint Solving via Backtracking
Constraint Solving by Backtracking
Computing Solutions with Backtracking
Examples of CSP solved by Backtracking
N-Queens problem
Optimisation Problems and Dynamic Programming
Implementing Fibonacci numbers
Knapsack Problem
Determining the Maximal Price
Solving Knapsack Problem via Dynamic Programming
Restoring the Optimal List of Items
Week 10: Data Encoding and Compression
File Input and Output in OCaml
Reading and Writing with Channels
Copying Files
Representing Strings
Binary Encoding of Data
Writing and Reading Binary Files
Writing and Reading OCaml Strings
Compressing DNA Sequences
Run-Length Encoding
Design Considerations
Implementation
Huffman Encoding
Assigning Codes via Character Trees
Serializing Huffman Trees
Constructing Huffman tree from Frequencies
Computing Relative Frequencies
Encoding and Writing the Compressed Text
Decompression
Testing and Running Huffman Compression
Installing GraphViz
Microsoft Windows 10
Linux
Mac OS X
Week 11: Binary Search Trees
Representing Sets via Binary Search Trees
A Data Structure for Binary-Search Trees
Inserting an element into a BST
Binary-Search-Tree Invariant
Testing Tree Operations
Printing a Tree
Searching Elements
Tree Traversals
Testing Element Retrieval and Tree Traversals
More BST operations
Deleting a node from BST
BST Rotations
Week 12: Graph Algorithms
Representing Graphs
Graphs as Adjacency Lists
Reading and Printing Graphs
Rendering Graphs via GraphViz
Shortcomings of Adjacency-List graph representation
Graphs as Linked Data Structures
Switching between graph representations
Testing graph operations
Reachability and Graph Traversals
Checking Reachability in a Graph
Testing Reachability
Rendering Paths in a Graph
Depth-First Traversal
DFS and Reachability
DFS and Cycle Detection
Topological Sort
Testing Topological Sort
Single-Source Shortest Paths
Weighted Graphs
Some Properties of Paths
Representing Shortest Paths
Representing Distance
Initialisation and Relaxation
Bellman-Ford Algorithm
Rendering Minimal Paths
Dijkstra’s Algorithm
Testing Shortest-Path Algorithms
Minimal Spanning Trees
Representing Undirected Graphs
Trees in Undirected Connected Graphs
Minimal Spanning Trees
Kruskal’s Algorithm
Testing MST Construction
Other MST Algorithms
Exercises
Exercise 1
Exercise 2
Week 13: Elements of Computational Geometry
Basics of Computational Geometry
Working with graphics in OCaml
Points, Segments and their Properties
On precision and epsilon-equality
Points on a two-dimensional plane
Points as vectors
Scalar product of vectors
Polar coordinate system
Vector product and its properties
Segments on a plane
Generating random points on a segment
Collinearity of segments
Checking for intersections
Finding intersections
Working with Polygons
Encoding and rendering polygons
Some useful polygons
Basic polygon manipulations
Queries about polygons
Intermezzo: rays and intersections
Point within a polygon
Convex Hulls
Plane-sweeping algorithm
Graham scan invariant
Final Project: Vroomba Programming
Coding Assignment
Report
Slides and Supplementary Materials
Examples and Code
Textbooks

Introductory Data Structures and Algorithms

This course on classic imperative data structures and algorithms has been offered by Ilya Sergey at Yale-NUS College in 2019-2021. Feel free to use all the materials and submit pull requests on GitHub (the links to the sources are given below). Please get in touch if you'd like to get access to the homework assignments or to the model solutions for the midterm/final projects.

Course Syllabus

The list of topics below is subject to possible minor changes.

Week 01
• Introduction
• Testing OCaml Code
• Correctness of Recursive Algorithms
• From Recursion to Imperative Loops
• Sorting Lists via Insertion Sort

Week 02
• Arrays and Operations on Them
• Insertion Sort and Selection Sort on Arrays
• Complexity of Algorithms and Order Notation
• Sums of Series and Complexities of Loops

Week 03
• Complexity of Simple Recursive Algorithms
• Generating Arrays
• Searching in Arrays
• Merge Sort

Week 04
• Quicksort and its Variations
• Complexity of Divide-and-Conquer Algorithms
• Generalising Comparison-Based Sorting
• Best-Worst Case for Comparison-Based Sorting
• Sorting in Linear Time

Week 05
• Printing and Validating Generic Arrays
• Binary Heaps
• Maintaining Binary Heaps
• Heapsort and Priority Queues

Week 06
• Abstract Data Types
• Stacks and Queues
• Testing Abstract Data Types

Mid-Term Project

Week 07
• Hash-tables
• Scalable Hash-Tables
• Bloom Filters and Their Applications

Week 08
• Substring Search
• Rabin-Karp Search
• Knuth–Morris–Pratt Algorithm

Week 09
• Equivalence Classes and Union-Find
• Constraint Solving via Backtracking
• Optimisation Problems and Dynamic Programming

Week 10
• File Input and Output in OCaml
• Binary Encoding of Data
• Run-Length Encoding
• Huffman Encoding

Week 11
• Representing Sets via Binary Search Trees
• Representing Graphs

Week 12
• Reachability and Graph Traversals
• Single-Source Shortest Paths
• Minimal Spanning Trees

Week 13
• Basics of Computational Geometry
• Points, Segments and their Properties
• Working with Polygons
• Convex Hulls

Final Project

Software Prerequisites

In this course, we will be using OCaml as the main programming language. We will be working with multi-file projects, which make use of various external libraries and involve automated testing and graphics. Because of this, you will have to install a number of software utilities that you did not need in the Intro to Computer Science class. This document provides detailed instructions on how to do so, depending on the operating system you use. Please allocate at least 3 hours for going through this setup document, as some of the software packages required for our class will take quite a while to install.

Installing OCaml Libraries and Setting up Editors

First, we need to install all the software necessary for fully fledged OCaml development. The instructions on how to do so are provided below. If you followed the instructions, but something is not working as it should, check out the Troubleshooting section at the end of this page.

Microsoft Windows 10

Unfortunately, the OCaml infrastructure is not supported well on Windows natively, so developing large multi-file projects on it is problematic. To circumvent this issue, we will be running OCaml and the related software using Windows Subsystem for Linux (WSL), a utility that allows you to run a distribution of Linux within your Windows 10 system. This setup takes a large number of steps, but once you're done with it, you'll have premium support for OCaml, as well as a fully functional Linux distribution installed on your machine.

1. First, let us enable WSL and install a Linux distribution. The detailed steps are given in this online tutorial. Don't forget the password for the Linux account you've just created: you will need it to install some software. At the end of this step, you should be able to run a "barebone" Ubuntu Linux terminal as an application within your Windows system. In my case, it looks as follows.


2. You can access your Windows home folder from WSL Linux via the path /mnt/c/Users/YOURNAME/, where YOURNAME is your Windows user name. It is convenient to make a symbolic link for it, so you can access it quickly, for instance, calling it home. This is how you create such a link:

   cd ~
   ln -s /mnt/c/Users/YOURNAME/ home

   Now you can navigate to your Windows home folder via cd ~/home and to your Linux home folder via cd ~.

3. Next, we need to install a graphical shell for the Linux distribution running in WSL. This article provides detailed instructions on how to do so. Here are some comments:

   ◦ You don't have to install Firefox in WSL Linux, as you can use your Windows browser instead.
   ◦ The required graphical XServer shell (run separately from Windows) can be downloaded from this resource.
   ◦ To run Linux in the graphical mode, you will always have to first run the XServer, and then the Ubuntu shell, in which you will have to type xfce4-session. The Ubuntu window will have to stay running as long as you use Linux.

4. If the Linux image appears to be somewhat "blurred", here's how to fix it:

   ◦ First, add the following lines at the end of the file ~/.bashrc in your Linux home folder:


     export GDK_SCALE=0.5
     export GDK_DPI_SCALE=2

   This can be done using the nano editor, similarly to how it is done in this tutorial.

   ◦ Next, close any running instance of that X server (VcXsrv). Open the folder where you have installed it (e.g., C:\Program Files\VcXsrv), right click on vcxsrv.exe. Choose Properties > Compatibility tab > Change high DPI settings > Enable Override high DPI scaling and change it to the Application option. Here is what it looks like after changing the settings:

5. Once you have done all of this, you can run a Linux terminal within the graphical XFCE shell and execute all commands from it, rather than from a Windows-started Ubuntu terminal. In my case, it looks as follows:


6. It's time to install the OCaml libraries. First, we need to install a number of Linux packages that OCaml needs. Run the following line from the Linux terminal (it can be done both from within the graphical shell, or from within a separate Ubuntu terminal run as a Windows application):

   sudo apt install make m4 gcc pkg-config libx11-dev

   Don't forget to enter the password you've created for your Linux account; it might be different from your Windows one. Be patient: installing those packages will take quite some time.

7. Next, we will install the opam package manager for working with different OCaml libraries. Execute the following lines from the Linux terminal:

   sudo add-apt-repository ppa:avsm/ppa
   sudo apt install opam
   opam init -y --compiler=4.10.0 --disable-sandboxing
   eval $(opam env)
   opam install -y dune core batteries utop graphics merlin ocp-indent

Once done, add the following line to your ~/.bashrc file:

   eval $(opam env)

After that, close your terminal window and start a new one.


To check that your OCaml is correctly installed, run ocamlc --version from the terminal. You should get the output 4.10.0, which is the version of the OCaml compiler we have just installed.

8. We recommend you use VSCode for your OCaml development, assuming you've completed steps 1-7. Start by installing the Remote-WSL plugin. It is the one suggested the first time you run VSCode. Alternatively, you can install it by pressing Ctrl-Shift-P, typing install extensions, choosing the Install Extensions item from the dropdown menu, and then finding and installing the Remote-WSL extension. After installing that extension, press Ctrl-Shift-P and choose Remote-WSL: New Window. This will take a few seconds and will start a new window of VSCode that runs inside your WSL Linux (you can even start a Linux terminal there). Next, in this remote window, install the extension "OCaml and Reason IDE" in the same way as described above. Now you can open an OCaml file (Ctrl-Shift-P, followed by "File: Open File") and enjoy the advanced features: highlighting, code completion, and type information, as well as many others. An example of the UI is shown below. Notice the indicators at the bottom of the screen, showing that VSCode runs in WSL (Ubuntu), with OCaml/merlin support enabled:


Linux

If you're using Linux, the setup is similar to the one for Windows 10 WSL described previously. Just follow the steps above starting from step 6. If you're using a distribution different from Ubuntu, make sure to use the corresponding package manager (instead of apt) to get the system packages in step 6. If you wish to use VSCode, just follow the instructions in step 8 for Windows 10 WSL, skipping the part about Remote-WSL and the remote window, and starting from installing the "OCaml and Reason IDE" extension.

macOS

OCaml is well-supported in macOS, so the installation process is fairly straightforward.

1. Install the Homebrew package manager for macOS.

2. Install the following system packages using Homebrew:

   brew install make m4 gcc pkg-config

3. Install the XQuartz X window system for macOS. Make sure you install it before you install opam and the libraries below. We will need this library for a few graphical applications at the end of this course. Once you have done it, log out from the system and log in again.

4. Next, we will install the opam package manager for installing and maintaining different OCaml libraries. Execute the following lines from the terminal:

   brew install opam
   opam init -y --compiler=4.10.0
   eval $(opam env)
   opam install -y dune core batteries utop graphics merlin ocp-indent

   Once done, add the following line to your ~/.bashrc or ~/.profile file (if they exist, otherwise create ~/.bashrc):

   eval $(opam env)

   After that, close your terminal window and start a new one. Notice that if you had some opam installation before completing this step, the installation of the graphics package will fail. To avoid it, please run this line before installing the packages listed above:

   opam switch reinstall 4.10.0

   To check that your OCaml is correctly installed, run ocamlc --version from the terminal. You should get the output 4.10.0, which is the version of the OCaml compiler we have just installed.

5. We suggest you use VSCode for OCaml development. To do so, after downloading and installing the VSCode IDE, you will need to install the OCaml and Reason IDE extension, which enables OCaml support in VSCode (assuming you have installed all the libraries above via opam in step 4). You can install the extension by pressing Command-Shift-P, typing Install Extensions, and choosing that item from the dropdown menu. Now, if you open an OCaml file, it will look like this:


FAQ & Troubleshooting 1. Question : May I use Emacs for programming in OCaml? Answer : Of course, you can! This tutorial used to have notes on how to configure Emacs/ Aquamacs for OCaml, but the experience of the last few years has confincingly demonstrated that VSCode is a much simpler and more convenient to use alternative, so why don’t you give it a try? 2. Problem : In-line tests are highlighed red in my editor with an error message ppx_inline_test: extension is disabled because the tests would be ignored ... . Solution : This is a bug in a certain version of the tests. To fix it, install a fixed version of the testing liberary as follows: opam install -y ppx_inline_test.v0.14.0 Then, in your project, run make clean; make . After that, the error in the editor should be gone. 3. Problem : Merlin is not detected by VSCode, which gives an error “ ocamlmerlin is not found”. Solution : This is the case if you didn’t add eval $(opam env) to the configuration files (e.g., ~/.bashrc

and/or ~/.profile ). Adding it and restarting VSCode should fix it.


   Alternatively (NOT RECOMMENDED), you can add the following line to the settings.json file (with your account name instead of YOURNAME). To find that file, press Command-Shift-P and choose "Preferences: Open Settings (JSON)" (to find it, just type "settings" and choose the correct option):

   "reason.path.ocamlmerlin": "/Users/YOURNAME/.opam/4.10.0/bin/ocamlmerlin"

   For example, in my case the contents of this file look as follows:

   {
     "window.zoomLevel": 2,
     "search.searchOnType": false,
     "reason.path.ocamlmerlin": "/Users/ilya/.opam/4.10.0/bin/ocamlmerlin"
   }

   Don't forget to save the file.

4. Problem: In VSCode, a Git icon in the vertical panel on the left keeps blinking with the "watch" symbol when being updated.
   Solution: Add the following line to your settings.json file:

   "git.showProgress": false

5. Problem: When installing tuareg mode with opam on macOS, I get an error: [ERROR] The compilation of conf-emacs failed at ...
   Solution: This can be solved by installing a particular version 2.0.8 of tuareg:

   opam install tuareg.2.0.8

   Another way to fix it is to ensure that the executable emacs is in your PATH. This can be done by, e.g., installing emacs via Homebrew:

   brew install emacs


Installing and using Git

We will be using git as version control for this course. You will have to master a small set of its commands to be able to submit your homework assignments. A command-line client for git comes as a part of standard macOS and Linux distributions, but you can install it separately via apt or brew. Please also create yourself an account on GitHub, as you will need it to make submissions.

To work with GitHub comfortably, you will need to set up your SSH keys. To do so, run the following command in your terminal (entering your email):

   ssh-keygen -t rsa -b 4096 -C "your_email@example.com"

After that, run cat ~/.ssh/id_rsa.pub and copy all the text starting with ssh-rsa and ending with your email. Follow these instructions to add this text as your public SSH key to your GitHub entry.

Finally, execute the following commands from the terminal, providing your email address and name correspondingly:

   git config --global user.email "your_email@example.com"
   git config --global user.name "Your Name"

These quick tutorials should be helpful in learning the basic commands for working with Git:

• Git basics
• Git cheat sheet

Don't worry - you will have plenty of opportunities to master this knowledge during the course!

Finally, please consider applying for student benefits on GitHub. This is totally free and will allow you to make the most of your GitHub account. The instructions on how to apply can be found online.

Checking your setup

To ensure that you've got all the software installed correctly, let us retrieve, compile and run a small self-contained OCaml project.


Getting a project from Git

First, open this GitHub project:

• https://github.com/ysc2229/ocaml-graphics-demo

Click the "Clone or Download" button and choose "Use SSH" as a cloning option:

Next, copy the URL git@github.com:ysc2229/ocaml-graphics-demo.git to your buffer. Switch to the terminal in your WSL Linux or macOS system, and create a folder where you'll be storing your OCaml projects. It might be ~/projects or ~/home/projects or whatever you prefer. You can do it as follows:

   cd ~
   mkdir projects
   cd projects

Now run these commands from the folder projects:

   git clone git@github.com:ysc2229/ocaml-graphics-demo.git
   cd ocaml-graphics-demo


If prompted to answer a question, simply answer y . We have just created a local copy of the simple repository.

Compiling and running a project

Let's compile it and run the executables. Execute the following commands:

   make
   bin/demo

After a few seconds (longer on macOS), you should get a window with a funny face. Feel free to play with it and close it when done. You can also browse the sources of the project with Emacs.

Running utop REPL

utop is a replacement for the standard OCaml REPL, providing a richer set of features and nicer highlighting. Unfortunately, it cannot be used directly from Emacs with multi-file projects, but we can run it from the terminal. For instance, for the project above, we can invoke utop by running:

   make utop

Now we can load modules defined in the project (e.g., GraphicsUtil) and play with the definitions. Use Esc + Left/Right/Down arrows to navigate between auto-completion options and choose one. An example is shown in the screenshot below:


Fun, isn’t it? Now you’re ready to take the class.


OCaml Style Guide

One important goal in this class is to teach you how to program elegantly. You have most likely spent many years in secondary school learning style with respect to the English language; programming should be no different. Every programming language demands a particular style of programming, and forcing one language's style upon another can have disastrous results. Of course, there are some elements of writing a computer program that are shared between all languages. You should be able to pick up these elements through experience.

Listed below are some style guidelines for OCaml. Egregious violation of these guidelines may result in a loss of programming style points. Note that these guidelines cover many more OCaml features than we will be expecting you to use in this class. Although the list below seems daunting, most of the suggestions are common sense. Also, you should note that these rules come nowhere near the style mandates you will likely come across in industry. Many companies go so far as to dictate exactly where spaces can go.

Acknowledgement: Much of this style guide is adapted from CIS341 at UPenn.

File Submission Requirements:
1. Code must compile
2. 80 column limit
3. No tab characters

Commenting:
1. Comments go above the code they reference
2. Avoid useless comments
3. Avoid over-commenting
4. Line breaks
5. Proper multi-line commenting

Naming and Declarations:
1. Use meaningful names
2. Naming conventions
3. Type annotations
4. Avoid global mutable variables
5. When to rename variables
6. Order of declarations in a structure

Indenting:
1. Indent two spaces at a time
2. Indenting nested let expressions
3. Indenting match expressions
4. Indenting if expressions
5. Indenting comments

Using Parentheses:
1. Parenthesize to help indentation
2. Wrap match expressions with parentheses
3. Over parenthesizing

Pattern Matching:
1. No incomplete pattern matches
2. Pattern match in the function arguments when possible
3. Function arguments should not use values for patterns
4. Avoid using too many projections
5. Pattern match with as few match expressions as necessary
6. Don't use List.hd, List.tl, or List.nth

Code Factoring:
1. Don't let expressions take up multiple lines
2. Break up large functions into smaller functions
3. Over-factoring code

Verbosity:
1. Don't rewrite existing code
2. Misusing if expressions
3. Misusing match expressions
4. Other common misuses
5. Don't rewrap functions
6. Avoid computing values twice

File Submission Requirements

1. Code Must Compile: Any code you submit must compile. If it does not compile, we won't grade the project and you will lose all the points for the project. You should treat any compiler warnings as errors.

2. 80 Column Limit: No line of code should have more than 80 columns. Using more than 80 columns causes your code to wrap around to the next line, which is devastating for readability. Ensuring that all your lines fall within the 80 column limit is not something you should postpone until you have finished programming.

3. No Tab Characters: Do not use the tab character (0x09). Instead, use spaces to control indenting. Eclipse provides good tab stops by default. The Emacs package from the OCaml website avoids using tabs (with the exception of pasting text from the clipboard or kill ring). When in ml-mode, Emacs uses the TAB key to control indenting instead of inserting the tab character.

Commenting

1. Comments Go Above the Code They Reference: Consider the following:

   let sum = List.fold_left (+) 0 (* Sums a list of integers. *)

   (* Sums a list of integers. *)
   let sum = List.fold_left (+) 0

   The latter is the better style, although you may find some source code that uses the first. We require that you use the latter.

2. Avoid Useless Comments: Comments that merely repeat the code they reference or state the obvious are a travesty to programmers. Comments should state the invariants, the non-obvious, or any references that have more information about the code.

3. Avoid Over-commenting: Incredibly long comments are not very useful. Long comments should only appear at the top of a file: there you should explain the overall design of the code and reference any sources that have more information about the algorithms or data structures. All other comments in the file should be as short as possible; after all, brevity is the soul of wit. Most often the best place for any comment is just before a function declaration. Rarely should you need to comment within a function: variable naming should be enough.

4. Line Breaks: Obviously, the best way to stay within the 80 character limit imposed by the rule above is pressing the enter key every once in a while. Empty lines should be included between value declarations within a struct block, especially between function declarations. Often it is not necessary to have empty lines between other declarations unless you are separating the different kinds of declarations (such as structures, types, exceptions and values). Unless function declarations within a let block are long, there should be no empty lines within a let block. There should never be an empty line within an expression.

5. Proper Multi-line Commenting: When comments are printed on paper, the reader lacks the advantage of color highlighting performed by an editor such as Emacs. This makes it important for you to distinguish comments from code. When a comment extends beyond one line, each subsequent line should be preceded with a *, similar to the following:

   (* This is one of those rare but long comments
    * that need to span multiple lines because
    * the code is unusually complex and requires
    * extra explanation. *)
   let complicatedFunction () = ...
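To make rule 2 concrete, here is a minimal sketch (the functions and comments are hypothetical, not from the course materials):

   (* Useless: merely restates the code. *)
   (* Applies f to every element of the list. *)
   let apply_all f ls = List.map f ls

   (* Useful: records a non-obvious assumption the code relies on. *)
   (* Assumes arr is sorted in non-decreasing order, so the
      minimum is its first element. *)
   let min_of_sorted arr = arr.(0)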


Naming and Declarations

1. Use Meaningful Names: Variable names should describe what they are for. Distinguishing what a variable references is best done by following a particular naming convention (see the suggestion below). Variable names should be words or combinations of words. Cases where variable names can be one letter are in short let blocks. Often it is the case that a function used in a fold, filter, or map is bound to the name f. Here is an example with short variable names:

   let d = Unix.localtime (Unix.time ()) in
   let m = d.Unix.tm_min in
   let s = d.Unix.tm_sec in
   let f n = (n mod 3) = 0 in
   List.filter f [m; s]

2. Naming Conventions: The following are the naming guidelines that are followed by the OCaml library; try to follow similar conventions:

   • Variables and functions: symbolic or initial lower case; use underscores for multiword names (e.g., get_item).
   • Constructors: initial upper case; use embedded caps for multiword names (e.g., Node, EmptyQueue). Historic exceptions are true and false. Rarely are symbolic names like :: used.
   • Types: all lower case; use underscores for multiword names (e.g., priority_queue).
   • Module Types: initial upper case; use embedded caps for multiword names (e.g., PriorityQueue).
   • Modules: same as the module type convention (e.g., PriorityQueue).
   • Functors: same as the module type convention (e.g., PriorityQueue).

These conventions are not enforced by the compiler, though violations of the variable/constructor conventions ought to cause warning messages because of the danger of a constructor turning into a variable when it is misspelled.

3. Type Annotations: Complex or potentially ambiguous top-level functions and values should be declared with types to aid the reader. Consider the following:

   let get_bit bitidx n =
     let shb = Int32.shift_left 1l bitidx in
     Int32.logand shb n = shb

   let get_bit (bitidx: int) (n: int32) : bool =
     let shb = Int32.shift_left 1l bitidx in
     Int32.logand shb n = shb

The latter is considered better. Such type annotations can also help significantly when debugging typechecking problems.

4. Avoid Global Mutable Variables: Mutable values should be local to closures and almost never declared as a structure's value. Making a mutable value global causes many problems. First, code that mutates the value cannot be sure that the value is consistent with the algorithm, as it might be modified outside the function or by a previous execution of the algorithm. Second, and more importantly, having global mutable values makes it more likely that your code is nonreentrant. Without proper knowledge of the ramifications, declaring global mutable values can extend beyond bad style to incorrect code (see the sketch at the end of this subsection).

5. When to Rename Variables: You should rarely need to rename values; in fact, this is a sure way to obfuscate code. Renaming a value should be backed up with a very good reason. One instance where renaming a variable is common and encouraged is aliasing structures. In these cases, other structures used by functions within the current structure are aliased to one or two letter variables at the top of the struct block. This serves two purposes: it shortens the name of the structure and it documents the structures you use. Here is an example:

   module H = Hashtbl
   module L = List
   module A = Array
   ...

6. Order of Declarations in a Structure: When declaring elements in a file (or nested module), you first alias the structures you intend to use, followed by the types, followed by exceptions, and lastly list all the value declarations for the structure. Here is an example:

   module L = List

   type foo = unit

   exception InternalError

   let first list = L.nth list 0

   Note that every declaration within the structure should be indented the same amount.
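To illustrate rule 4, here is a minimal sketch contrasting global and local mutable state (the functions are hypothetical, not from the course materials):

   (* Bad: a global mutable counter; any other code can modify it,
      and two interleaved calls would corrupt each other's count. *)
   let steps = ref 0

   let count_steps_bad f xs =
     steps := 0;
     List.iter (fun x -> incr steps; f x) xs;
     !steps

   (* Better: the reference is local to the function, so every
      call gets its own private counter. *)
   let count_steps f xs =
     let steps = ref 0 in
     List.iter (fun x -> incr steps; f x) xs;
     !steps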

Indenting

1. Indent Two Spaces at a Time: Most lines that indent code should only indent by two spaces more than the previous line of code.

2. Indenting Nested let Expressions: Blocks of code that have nested let expressions should not be indented.

   Bad:

   let x = exp1 in
     let y = exp2 in
       x + y

   Good:

   let x = exp1 in
   let y = exp2 in
   x + y

3. Indenting match Expressions: Indent similar to the following:

   match expr with
   | pat1 -> ...
   | pat2 -> ...

4. Indenting if Expressions: Indent similar to the following:

   if exp1 then exp2
   else if exp3 then exp4
   else if exp5 then exp6
   else exp8

   if exp1 then exp2 else exp3

   if exp1
   then exp2
   else exp3

5. Indenting Comments: Comments should be indented to the level of the line of code that follows the comment.

Using Parentheses

1. Parenthesize to Help Indentation: Indentation algorithms are often assisted by added parenthesization. Consider the following:

   let x = "Long line..."^
     "Another long line."

   let x = ("Long line..."^
            "Another long line.")

   The latter is considered better style.

2. Wrap match Expressions with Parentheses: This avoids a common (and confusing) error that you get when you have a nested match expression (see the sketch at the end of this subsection).

3. Over Parenthesizing: Parentheses have many semantic purposes in ML, including constructing tuples, grouping sequences of side-effect expressions, forcing higher precedence on an expression for parsing, and grouping structures for functor arguments. Clearly, parentheses must be used with care. You should only use parentheses when necessary or when they improve readability. Consider the following two function applications:

   let x = function1 (arg1) (arg2) (function2 (arg3)) (arg4)

   let x = function1 arg1 arg2 (function2 arg3) arg4

   The latter is considered better style. Parentheses should usually not appear on a line by themselves, nor should they be the first graphical character; parentheses do not serve the same purpose as brackets do in C or Java.
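To illustrate rule 2, here is a minimal sketch (the function is hypothetical, not from the course materials). Without the parentheses around the inner match, its case list would silently swallow the outer | _ -> branch:

   let describe x y =
     match x with
     | 0 -> (match y with
             | 0 -> "both zero"
             | _ -> "only x is zero")
     | _ -> "x is non-zero"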

Pattern Matching

1. No Incomplete Pattern Matches: Incomplete pattern matches are flagged with compiler warnings. We strongly discourage compiler warnings when grading; thus, if there is a compiler warning, the project will get reduced style points.

2. Pattern Match in the Function Arguments When Possible: Tuples, records and datatypes can be deconstructed using pattern matching. If you simply deconstruct the function argument before you do anything useful, it is better to pattern match in the function argument. Consider these examples:

   Bad:

   let f arg1 arg2 =
     let x = fst arg1 in
     let y = snd arg1 in
     let z = fst arg2 in
     ...

   Good:

   let f (x, y) (z, _) = ...

   Bad:

   let f arg1 =
     let x = arg1.foo in
     let y = arg1.bar in
     let baz = arg1.baz in
     ...

   Good:

   let f {foo = x; bar = y; baz} = ...

3. Function Arguments Should Not Use Values for Patterns: You should only deconstruct values with variable names and/or wildcards in function arguments. If you want to pattern match against a specific value, use a match expression or an if expression. We include this rule because there are too many errors that can occur when you don't do this exactly right. Consider the following:

   let fact 0 = 1
     | fact n = n * fact (n - 1)

   let rec fact n = if n = 0 then 1 else n * fact (n - 1)

   The latter is considered better style.

4. Avoid Using Too Many Projections: Frequently projecting a value from a record or tuple causes your code to become unreadable. This is especially a problem with tuple projection, because the value is not documented by a variable name. To prevent projections, you should use pattern matching with a function argument or a value declaration. Of course, using projections is okay as long as it is infrequent and the meaning is clearly understood from the context. The rule above shows how to pattern match in the function arguments. Here is an example of pattern matching with value declarations:

   Bad:

   let v = someFunction () in
   let x = fst v in
   let y = snd v in
   x + y

   Good:

   let x, y = someFunction () in
   x + y

5. Pattern Match with as Few match Expressions as Necessary: Rather than nest match expressions, you can combine them by pattern matching against a tuple. Of course, this doesn't work if one of the nested match expressions matches against a value obtained from a branch in another match expression. Nevertheless, if all the values are independent of each other, you should combine the values in a tuple and match against that. Here is an example:

   Bad:

   let d = Date.fromTimeLocal (Unix.time ()) in
   match Date.month d with
   | Date.Jan -> (match Date.day d with
                  | 1 -> print "Happy New Year"
                  | _ -> ())
   | Date.Jul -> (match Date.day d with
                  | 4 -> print "Happy Independence Day"
                  | _ -> ())
   | Date.Oct -> (match Date.day d with
                  | 10 -> print "Happy Metric Day"
                  | _ -> ())
   | _ -> ()

   Good:

   let d = Date.fromTimeLocal (Unix.time ()) in
   match (Date.month d, Date.day d) with
   | (Date.Jan, 1) -> print "Happy New Year"
   | (Date.Jul, 4) -> print "Happy Independence Day"
   | (Date.Oct, 10) -> print "Happy Metric Day"
   | _ -> ()

6. Don't Use List.hd, List.tl, or List.nth: The functions hd, tl, and nth are used to deconstruct list types; however, they raise exceptions on certain inputs. You should rarely use these functions. In the case that you find it absolutely necessary to use them (something that probably won't ever happen), you should handle any exceptions that they can raise.
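As a sketch of rule 6 (the function is hypothetical, not from the course materials): instead of reaching for List.hd, make the empty case explicit with a pattern match:

   (* Fragile: List.hd raises Failure "hd" on the empty list. *)
   let first xs = List.hd xs

   (* Better: the empty case is explicit, and the result type
      records that the element may be absent. *)
   let first xs = match xs with
     | [] -> None
     | h :: _ -> Some h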


Code Factoring

1. Don't Let Expressions Take Up Multiple Lines: If a tuple consists of more than two or three elements, you should consider using a record instead of a tuple. Records have the advantage of placing each name on a separate line and still looking good. Constructing a tuple over multiple lines makes your code look hideous; the expressions within a tuple construction should be extraordinarily simple. Other expressions that take up multiple lines should be written with a lot of thought. The best way to transform code that constructs expressions over multiple lines into something that has good style is to factor the code using a let expression. Consider the following (the original example is rendered here in OCaml):

   Bad:

   let rec euclid (m, n) : int * int * int =
     if n = 0 then (1, 0, m)
     else ((fun (_, v, _) -> v) (euclid (n, m mod n)),
           (fun (u, _, _) -> u) (euclid (n, m mod n))
             - (m / n) * (fun (_, v, _) -> v) (euclid (n, m mod n)),
           (fun (_, _, g) -> g) (euclid (n, m mod n)))

   Good:

   let rec euclid (m, n) : int * int * int =
     if n = 0 then (1, 0, m)
     else
       let q = m / n in
       let r = m mod n in
       let (u, v, g) = euclid (n, r) in
       (v, u - q * v, g)

2. Break Up Large Functions into Smaller Functions: One of the greatest advantages of functional programming is that it encourages writing smaller functions and combining them to solve bigger problems. Just how and when to break up functions is something that comes with experience.

3. Over-factoring Code: In some situations, it's not necessary to bind the result of an expression to a variable. Consider the following:

   Bad:

   let x = input_line stdin in
   match x with
   ...

   Good:

   match input_line stdin with
   ...

   Here is another example of over-factoring (provided y is not a large expression):

   let x = y * y in
   x + z

   y * y + z

   The latter is considered better.


Verbosity

1. Don't Rewrite Existing Code: The OCaml standard library has a great number of functions and data structures; use them! Often students will recode List.filter, List.map, and similar functions. Another common way in which one can avoid recoding is to use the fold functions. Writing a function that recursively walks down a list can almost always make use of List.fold_left or List.fold_right. Other data structures often have similar folding functions; use them whenever they are available.

2. Misusing if Expressions: Remember that the type of the condition in an if expression is bool. In general, the type of an if expression is 'a, but in the case that the type is bool, you should not be using if at all. Consider the following:

   Bad: if e then true else false          Good: e
   Bad: if e then false else true          Good: not e
   Bad: if beta then beta else false       Good: beta
   Bad: if not e then x else y             Good: if e then y else x
   Bad: if x then true else y              Good: x || y
   Bad: if x then y else false             Good: x && y
   Bad: if x then false else y             Good: not x && y
   Bad: if x then y else true              Good: not x || y

3. Misusing match Expressions: The match expression is misused in two common situations. First, match should never be used in place of an if expression (that's why if exists). Note the following:

   match e with
   | true -> x
   | false -> y

   if e then x else y

   The latter expression is much better. Another situation where if expressions are preferred over match expressions is as follows:

   match e with
   | c -> x  (* c is a constant value *)
   | _ -> y

   if e = c then x else y

   The latter expression is definitely better. The other misuse is using match when pattern matching in a let declaration is enough. Consider the following:

   let x = match expr with (y, z) -> y

   let x, _ = expr

   The latter is considered better.

4. Other Common Misuses: Here is a bunch of other common mistakes to watch out for:

   Bad: l :: nil                            Good: [l]
   Bad: l :: []                             Good: [l]
   Bad: length + 0                          Good: length
   Bad: length * 1                          Good: length
   Bad: big exp * same big exp              Good: let x = big exp in x * x
   Bad: if x then f a b c1 else f a b c2    Good: f a b (if x then c1 else c2)

5. Don't Rewrap Functions: When passing a function around as an argument to another function, don't rewrap the function if it already does what you want it to. Here's an example:

   List.map (fun x -> sqrt x) [1.0; 4.0; 9.0; 16.0]

   List.map sqrt [1.0; 4.0; 9.0; 16.0]

   The latter is better. Another case of rewrapping a function is often associated with infix binary operators. To avoid rewrapping a binary operator, wrap it in parentheses, which turns it into a prefix function. Consider this example:

   List.fold_left (fun x y -> x + y) 0

   List.fold_left (+) 0

   The latter is considered better style.

6. Avoid Computing Values Twice: When computing values twice, you're wasting CPU time and making your program ugly. The best way to avoid computing things twice is to create a let expression and bind the computed value to a variable name. This has the added benefit of letting you document the purpose of the value with a variable name, which means less commenting.
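As a sketch of rule 6 (the function is hypothetical, not from the course materials):

   (* Bad: List.length xs is computed twice. *)
   let describe_size xs =
     if List.length xs > 100 then "large"
     else string_of_int (List.length xs)

   (* Good: the length is computed once and documented by its name. *)
   let describe_size xs =
     let size = List.length xs in
     if size > 100 then "large" else string_of_int size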

Lecture Notes

Week 01: Introduction

Introduction

About this course

Data represents information, and computations represent data processing, i.e., obtaining new information from what we already know about the world. An algorithm is any well-defined computational procedure that takes some data value, or a set of values, as input and produces some value, or set of values, as output, always terminating with a result. An algorithm is thus a sequence of computational steps that transform the input data into the output data. In this course, we will take a look at some of the problems that can be solved by algorithms, learn how to approach challenges that require an algorithmic solution, and, most importantly, learn how to reason about crucial properties of algorithms: correctness, termination, and complexity.

What problems are solved by algorithms?

Algorithms describe a problem in a way that it could be implemented as a computer program and solved automatically. You can think of an algorithm as a description of a procedure that achieves that. For example, we might need to sort a sequence of numbers in a non-decreasing order. This problem arises frequently in practice and provides fertile ground for introducing many standard design techniques and analysis tools. Here is how we formally define the sorting problem:

• Input: a sequence of numbers a1, a2, …, an.
• Output: a permutation (or reordering) b1, b2, …, bn of the initial sequence, such that b1 ≤ b2 ≤ … ≤ bn.

Warm-up: finding a minimum in a list of integers

As a warm-up, consider the problem of finding the minimum in a list of integers. The following recursive function solves it, returning None for an empty list:

   let find_min ls =
     let rec walk xs min =
       match xs with
       | [] -> min
       | h :: t ->
         let min' = if h < min then h else min in
         walk t min'
     in
     match ls with
     | h :: t ->
       let min = walk t h in
       Some min
     | _ -> None
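For instance, in a hypothetical utop session:

   # find_min [31; 42; 239; 5; 100];;
   - : int option = Some 5
   # find_min [];;
   - : int option = None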

Reasoning about termination

How do we know that find_min indeed terminates on every input? Since the only source of non-termination in functional programs is recursion, in order to argue for the termination of find_min, we have to take a look at its recursive subroutine, namely walk. Let us notice that walk, whenever it calls itself recursively, always does so taking the tail t of its initial argument list xs as the new input. Therefore, every time it runs on a smaller list, and whenever it reaches the empty list [], it simply outputs the result min.

The list argument xs, or, more precisely, its size, is commonly referred to as a termination measure or variant of a recursive function. Somewhat more formally, a variant of a recursive procedure f is a function that maps the arguments of f to an integer number n, such that every recursive call to f decreases it, so that it eventually reaches the value corresponding to the final step of the computation, at which point the function terminates, returning the result.
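A minimal sketch of this idea in code (the function walk_variant is hypothetical, not part of the course code): the variant of walk maps its arguments to the length of its list argument, which strictly decreases with every recursive call and is bounded from below by 0.

   (* The termination measure of walk: it equals List.length xs and
      decreases by one on each recursive call walk t min'. *)
   let walk_variant xs _min = List.length xs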

Reasoning about correctness

How do we know that the function is indeed correct, i.e., does what it's supposed to do? A familiar way to probe the implementation for the presence of bugs is to give the function a specification and write some tests. The declarative specification, defined as a function in OCaml, defines m as a minimum for a list ls, if all elements of ls are not smaller than m, and also m is indeed an element of ls:

   let is_min ls m =
     List.for_all (fun e -> m <= e) ls &&
     List.mem m ls

Since find_min returns an option, we also need a way to extract its result, raising an error in the empty case:

   let get_exn o = match o with
     | Some e -> e
     | _ -> raise (Failure "Empty result!")

   let find_min_spec find_min_fun ls =
     let result = find_min_fun ls in
     ls = [] && result = None ||
     is_min ls (get_exn result)

The specification checker find_min_spec is parameterised by both the function candidate find_min_fun to be checked and a list ls provided as an argument. We can now test it as follows:

   # find_min_spec find_min [];;
   - : bool = true
   # find_min_spec find_min [1; 2; 3];;
   - : bool = true
   # find_min_spec find_min [31; 42; 239; 5; 100];;
   - : bool = true
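Conversely, the checker rejects a buggy candidate. For instance (a hypothetical implementation, not from the course materials):

   (* Wrong: returns the head of the list instead of the minimum. *)
   let find_min_wrong ls = match ls with
     | [] -> None
     | h :: _ -> Some h

   # find_min_spec find_min_wrong [2; 1];;
   - : bool = false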

Those test cases are only meaningful if we trust that our specification find_min_spec indeed correctly describes the expected behaviour of its argument find_min_fin . In other words, to recall, the tests are only as good as the specification they check: if the specification captures a wrong property or ignores some essential relations between an input and a result of an algorithm, then such tests can make more harm than good, giving a false sense of an implementation not having any issues. What we really want to ensure is that the recursive walk function processes the lists correctly, iteratively computing the minimum amongst the list’s elements, getting closer to it with every iteration. That is, since each “step” of walk either returns the result or recomputes the minimum, for the part of the list already observed , it would be good to capture it in some form of a specification. Such a specification for an arbitrary, possibly recursive, function f x1 ... xn with arguments , …, xn can be captured by means of a precondition and a postcondition , which are boolean

x1

functions that play the following role: • A precondition P x1 ... xn describes the relation between the arguments of f right before f

is called. It is usually the duty of the client of the function (i.e., the code that calls it) to make

sure that the precondition holds whenever f is about to be called. • A postcondition Q x1 ... xn res describes the relation between the arguments of f and its result right after f returns res , being called with x1 ... xn as its arguments. It is a duty of the function implementer of f to ensure that the postcondition holds. Together the pre- and postcondition P / Q of a function are frequently referred to as its contract , specification , or invariant . Even though we will be using those notions interchangeably, contract is most commonly appears in the context of dynamic correctness checking (i.e., testing),

40

Lecture Notes

while invariant is most commonly used in the context of imperative computations, which we will see below. A function f is called correct with respect to a specification P / Q , if whenever its input satisfies P (i.e., P x1 ... xn = true ), its result res satisfies Q (i.e., Q x1 ... xn res = true) . The process of checking that an implementation of a function obeys its ascribed specification is called program verification . Indeed, any function can be given multiple specifications. For instance, both P and Q can just be constant true , trivially making the function correct. The real power of being able to ascribe and check the specifications comes from the fact that they allow to reason about correctness of the computations that employ the specified function. Let us see how it works on our find_min example. What should be the pre-/postcondition we should ascribe to walk ? That very much depends on what do we want to be true of its result. Since it’s supposed to deliver the minimum of the list ls , it seems reasonable to fix the postcondition to be as follows: let find_min_walk_post ls xs min res = is_min ls res We can even use it for annotating (via OCaml’s assert ) the body of find_min making sure that it holds once we return from the top-level call of walk . Notice, that since walk is an internal function of find_min , its postcondition also includes ls , which it uses, so it can be considered as another parameter (remember lambda-lifting?). Choosing the right precondition for walk is somewhat trickier, as it needs to assist us in showing the two following executions properties of the function being specified: • In the base case of a recursion (in case of walk , it’s the branch [] -> ... ), it trivially gives us the desired property of the result, i.e., the postcondition holds. • It can be established before the initial and the recursive call. Unfortunately, coming up with the right preconditions for given postconditions is known to be a work of art. More problematically, it cannot be automated, and the problem of finding a precondition is similar to finding good initial hypotheses for theorems in mathematics. Ironically, this is also one of the problems that itself is not possible to solve algorithmically: we cannot have an algorithm, which, given a postcondition and a function, would infer a precondition for it in a general case. Such a problem, thus is equivalent to the infamous Halting Problem , but the proof of such an equivalence is outside the scope of this course. Nevertheless, we can still try to guess a precondition, and, for most of the algorithms it is quite feasible. The trick is to look at the postcondition (i.e., find_min_walk_post in our case) as the “final” state of the computation, and try to guess, from looking at the initial and intermediate


stages, what is different, and how exactly the program brings us to the state captured by the postcondition, approaching it gradually as it executes its body.

In the case of walk, every iteration (the case h :: t -> ...) recomputes the minimum based on the head of the current remaining list. In doing so, it makes sure that it has the most "up-to-date" value as a minimum, such that either it is already a global minimum (but we cannot be sure of it yet, as we have not seen the rest of the list), or the minimum is somewhere in the tail yet to be explored. This property is a reasonable precondition, which we can capture by the following predicate (i.e., a boolean function):

let find_min_walk_pre ls xs min = 
  (* xs is a suffix of ls *)
  is_suffix xs ls && 
  ((* min is a global minimum, *)
   is_min ls min ||
   (* or, the minimum is in the remaining tail xs *)
   List.exists (fun e -> e < min) xs)

This definition relies on two auxiliary functions:

let rec remove_first ls n = 
  if n <= 0 then ls
  else match ls with 
    | [] -> []
    | h :: t -> remove_first t (n - 1)

let is_suffix xs ls = 
  let n1 = List.length xs in
  let n2 = List.length ls in
  let diff = n2 - n1 in
  if diff < 0 then false
  else
    let ls_tail = remove_first ls diff in
    ls_tail = xs

Notice the two critical components of a good precondition:

• find_min_walk_pre holds before the first time we call walk from the main function's body.

• Assuming it holds at the beginning of the base case, we know it implies the desired result is_min ls min, as the second component of the disjunction, List.exists (fun e -> e < min) xs, becomes false with xs = [].

What remains is to make sure that the precondition is satisfied at each recursive call. We can do so by annotating our program suitably with assertions (this requires small modifications in order to assert the postconditions of the result):


let find_min_with_invariant ls = 
  let rec walk xs min = 
    match xs with
    | [] -> 
      let res = min in
      (* Checking the postcondition *)
      assert (find_min_walk_post ls xs min res);
      res
    | h :: t ->
      let min' = if h < min then h else min in
      (* Checking the precondition of the recursive call *)
      assert (find_min_walk_pre ls t min');
      let res = walk t min' in
      (* Checking the postcondition *)
      assert (find_min_walk_post ls xs min res);
      res
  in
  match ls with
  | h :: t -> 
    (* Checking the precondition of the initial call *)
    assert (find_min_walk_pre ls t h);
    let res = walk t h in
    (* Checking the postcondition *)
    assert (find_min_walk_post ls t h res);
    Some res
  | _ -> None

Adding the assert statements makes the program enforce the pre- and postconditions: had we guessed them wrongly, the program would crash on some inputs. For instance, we can change < to > in the main iteration of walk, and it will crash. We can now run the invariant-annotated program as before, ensuring that on all provided test inputs it does not crash and returns the expected results.

Why would the assertion right before the recursive call to walk crash, should we change < to >? Notice that the way min' is computed, it is "adapted" for the updated state in which the recursive call is made: specifically, it accounts for the fact that h might be the new global minimum of ls — something that would have been done wrongly with the opposite comparison.

Once we have checked the annotated function, we know that on those test inputs not only do we get the right answers (which could be sheer luck), but also that at every internal computation step the main worker function walk maintains a consistent invariant (i.e., satisfies its pre-/postconditions), thus keeping the computation "on track" towards the correct outcome.

Does this mean that the function is correct with respect to its invariant? Unfortunately, even though adding intermediate assertions gave us stronger confidence in this, the only tools we have at our disposal are still just tests. In order to gain full confidence in the function's


correctness, we would have to use a tool, such as Coq . Having pre-/postconditions would also be very helpful in that case, as they would specify precisely the induction hypothesis for our correctness proof. However, those techniques are explained in a course on Functional Programming and Proving, and we will not be covering them here.

From Recursion to Imperative Loops

• File: Loops.ml

The way the auxiliary function walk of find_min has been implemented is known as tail-call recursion: each recursive call happens to be the very last thing the function does in a non-base (recursive) case. Due to this structure, which leaves "nothing to do" after the recursive call, a tail-recursive function can be transformed into an imperative while-loop. The benefit of such a transformation is the possibility of not using the call stack, which is necessary for storing the calling context of the program, and instead keeping the computation "flat".

The transformation of a tail-recursive program into a program that uses a while-loop happens in two phases:

• Make the parameters of the function mutable references, so they can be re-assigned at each loop iteration.

• Make the negation of the "base"-case branch condition the condition of the while-loop. Whatever post-processing of the result takes place in the base case should now be done after the loop.

The result of transforming find_min into a loop is as follows:

let find_min_loop ls = 
  let loop cur_tail cur_min = 
    while !cur_tail <> [] do
      let xs = !cur_tail in
      let h = List.hd xs in
      let min = !cur_min in
      cur_min := if h < min then h else min;
      cur_tail := List.tl xs
    done;
    !cur_min
  in
  match ls with
  | h :: t -> 
    let cur_tail = ref t in
    let cur_min = ref h in
    let min = loop cur_tail cur_min in
    Some min
  | _ -> None

Notice that the function walk has been renamed to loop and is no longer recursive: the tail recursion has been "unfolded" into the loop, and the pattern matching has been replaced by the loop condition !cur_tail <> []. Furthermore, all parameters are now references that are re-assigned at each loop iteration. An important observation is that re-assigning the mutable variables in the imperative implementation is equivalent to passing new arguments in the corresponding recursive implementation. Knowing that makes it easy to "switch" between loop-based imperative and tail-recursive functional implementations, as illustrated by the sketch below.
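As an illustration of the same two-step recipe (this example is ours, not from the lecture files), consider a tail-recursive summation of a list and its loop-based counterpart:

let sum_rec ls = 
  let rec walk xs acc = match xs with
    | [] -> acc
    | h :: t -> walk t (acc + h)
  in walk ls 0

let sum_loop ls = 
  (* The parameters of walk become mutable references... *)
  let cur_tail = ref ls in
  let acc = ref 0 in
  (* ...and the negated base-case test becomes the loop condition *)
  while !cur_tail <> [] do
    acc := !acc + List.hd !cur_tail;
    cur_tail := List.tl !cur_tail
  done;
  (* Post-processing of the result happens after the loop *)
  !acc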

Loop variants

The function find_min_loop still terminates. The main source of potential non-termination in imperative programs, in addition to recursion, is loops. However, we can reason about loop termination in the same way we did for recursive programs: by means of finding a loop variant (i.e., a termination measure), expressed as a function of the values stored in the variables affected by a loop iteration. In the case of loop above, the loop variant is the size of the list stored in the variable cur_tail, which keeps decreasing, leading to the termination of the loop when it becomes zero. A small sketch of making this measure observable follows.
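A minimal sketch (ours, not part of the lecture files) that instruments the loop with its variant, asserting that the measure strictly decreases at every iteration:

let find_min_loop_variant ls = 
  match ls with
  | [] -> None
  | h :: t ->
    let cur_tail = ref t in
    let cur_min = ref h in
    while !cur_tail <> [] do
      (* The variant: the length of the remaining tail *)
      let variant_before = List.length !cur_tail in
      let xs = !cur_tail in
      let h = List.hd xs in
      cur_min := (if h < !cur_min then h else !cur_min);
      cur_tail := List.tl xs;
      (* Each iteration strictly decreases the measure *)
      assert (List.length !cur_tail < variant_before)
    done;
    Some !cur_min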

Loop invariants

Now that we have a program with a loop, can we use the same methodology to ensure its correctness using pre- and postconditions? The answer is yes, and, in fact, we are only going to need the definitions that we already have. The precondition of what used to be walk and is now loop becomes a loop invariant, which serves exactly the same purpose as the precondition of the recursive version. Specifically, it should

• be true before and after each iteration;

• when conjoined with the loop condition, allow for establishing the property of the loop-affected state, implying the client-imposed specification.

Notice that the first quality of the loop invariant is the same as that of the precondition. The fact that it must hold not just at the beginning, but also at the end of each iteration is because in a loop, a new iteration begins right after the previous one ends, and hence it expects its "precondition"/"invariant" to hold. The second quality corresponds to the intuition that the invariant/precondition should be chosen in such a way that, when the loop terminates (or, equivalently, a recursive function returns), the invariant allows us to infer the postcondition.


All that said, for our imperative version of finding a minimum, we can use find_min_walk_pre as the loop invariant, annotating the program as follows:

let find_min_loop_inv ls = 
  let loop cur_tail cur_min = 
    (* The invariant holds at the beginning of the loop *)
    assert (find_min_walk_pre ls !cur_tail !cur_min);
    while !cur_tail <> [] do
      let xs = !cur_tail in
      let h = List.hd xs in
      let min = !cur_min in
      cur_min := if h < min then h else min;
      cur_tail := List.tl xs;
      (* The invariant holds at the end of the iteration *)
      assert (find_min_walk_pre ls !cur_tail !cur_min)
    done;
    !cur_min
  in
  match ls with
  | h :: t -> 
    let cur_tail = ref t in
    let cur_min = ref h in
    (* The invariant holds at the beginning of the loop *)
    assert (find_min_walk_pre ls !cur_tail !cur_min);
    let min = loop cur_tail cur_min in
    (* Upon finishing the loop, the invariant implies the postcondition. *)
    assert (find_min_walk_post ls !cur_tail !cur_min min);
    Some min
  | _ -> None

Sorting Lists via Insertion Sort

• File: InsertSort.ml

The task of finding the minimal and the second-minimal element in a list can be made much simpler and faster if the list is pre-processed, namely, sorted. Indeed, for a sorted list we can just take its first or second element, knowing that it will be what we need. Below, we will see our first implementation of sorting a list.

Insertion sort implementation The following OCaml code implements the sorting procedure:


let insert_sort ls = 
  let rec walk xs acc =
    match xs with
    | [] -> acc
    | h :: t -> 
      let rec insert elem prefix = 
        match prefix with
        | [] -> [elem]
        | h :: t as l ->
          if h < elem 
          then h :: (insert elem t) 
          else (elem :: l)
      in
      let acc' = insert h acc in
      walk t acc'
  in 
  walk ls []

Notice that there are two recursive auxiliary functions in it: walk and insert. They play the following roles:

• The outer walk traverses the entire list and, for each next element, inserts it at a correct position into the prefix (which is already assumed to be ordered) via insert.

• The inner insert traverses the sorted prefix (called prefix) and inserts an element elem at a correct position.

Correctness of sorting

In order to reason about the correctness of sorting, we first need to say what its specification is, i.e., what a correctly sorted list is. This notion is described by the following definition:

let rec sorted ls = 
  match ls with 
  | [] -> true
  | h :: t -> List.for_all (fun e -> e >= h) t && sorted t

A list res is a correctly sorted version of a list ls if it is (a) sorted and (b) has all the same elements as ls, which we can define as follows:

let same_elems ls1 ls2 =
  List.for_all (fun e ->
      List.find_all (fun e' -> e = e') ls2 =
      List.find_all (fun e' -> e = e') ls1
    ) ls1 &&
  List.for_all (fun e ->
      List.find_all (fun e' -> e = e') ls2 =
      List.find_all (fun e' -> e = e') ls1
    ) ls2

let sorted_spec ls res = 
  same_elems ls res && sorted res

With the following function we can now test insertion sort:

let sort_test sorter ls = 
  let res = sorter ls in
  sorted_spec ls res;;

# insert_sort [];;
- : 'a list = []
# sort_test insert_sort [];;
- : bool = true
# insert_sort [5; 7; 8; 42; 3; 3; 1];;
- : int list = [1; 3; 3; 5; 7; 8; 42]
# sort_test insert_sort [5; 7; 8; 42; 3; 3; 1];;
- : bool = true

Sorting invariants

Let us now make the intuition about the correctness of sorting formal, capturing it in the form of specifications for the two recursive functions it uses, walk and insert. Since walk is tail-recursive, we can get away without its postcondition, and just specify its precondition, which is also its invariant:

let insert_sort_walk_inv ls t acc = 
  sorted acc && same_elems (acc @ t) ls

The invariant insert_sort_walk_inv ensures that the prefix acc processed so far is sorted, and also that its concatenation with the tail t yet to be processed has the same elements as the original list ls.

The recursive procedure insert is, unfortunately, not tail-recursive, hence we will have to provide both the pre- and the postcondition:

let insert_sort_insert_pre elem prefix = 
  sorted prefix

let insert_sort_insert_post res elem prefix = 
  sorted res && same_elems res (elem :: prefix)

That is, whenever insert is run on a prefix, it expects it to be sorted. Once it finishes, it returns a sorted list res, which has all the elements of prefix, plus the inserted elem. Notice that the postcondition of insert implies the precondition of walk at each recursive iteration.


Furthermore, the invariant of walk becomes the correctness specification of the top-level sorting function once t becomes empty, i.e., in its base case. We can now check all of those specifications by annotating the code with them:

let insert_sort_with_inv ls = 
  let rec walk xs acc =
    match xs with
    | [] -> 
      let res = acc in
      (* walk's postcondition *)
      assert (sorted_spec ls res);
      res
    | h :: t -> 
      let rec insert elem remaining = 
        match remaining with
        | [] -> 
          (* insert's postcondition *)
          assert (insert_sort_insert_post [elem] elem remaining);
          [elem]
        | h :: t as l ->
          if h < elem 
          then (
            (* insert's precondition *)
            assert (insert_sort_insert_pre elem t);
            let res = insert elem t in
            (* insert's postcondition *)
            (assert (insert_sort_insert_post (h :: res) elem remaining);
             h :: res))
          else 
            let res = elem :: l in
            (* insert's postcondition *)
            (assert (insert_sort_insert_post res elem remaining);
             res)
      in
      let acc' = (
        (* insert's precondition *)
        assert (insert_sort_insert_pre h acc);
        insert h acc) 
      in
      (* walk's precondition *)
      assert (insert_sort_walk_inv ls t acc');
      walk t acc'
  in 
  assert (insert_sort_walk_inv ls ls []);
  walk ls []


Exercises

Exercise 1

Give an example of a real-life application that requires an implementation of an algorithm (or several algorithms) as its part, and discuss the algorithms involved: how do they interact, and what are their inputs and outputs?

Exercise 2

Programming in OCaml in Emacs is much more pleasant with instant navigation, auto-completion, and type information available. Install all the necessary software following the provided Software Prerequisites.

Exercise 3

What is the termination measure of walk within find_min? Define it as a function f : 'a list -> int -> int and change the implementation of walk, annotating it with outputs, to check that the measure indeed decreases.

• Hint: use OCaml's Printf.printf utility to output results of the termination measure mid-execution.

Exercise 4

• Implement a function find_min2, similar to find_min (also using the auxiliary walk, but without relying on any other auxiliary functions, e.g., sorting), that finds not the minimal element, but the second-minimal element. For instance, it should give the following output on the list [2; 6; 78; 2; 5; 3; 1]:

# find_min2 [2; 6; 78; 2; 5; 3; 1];;
- : int option = Some 2

Hint: walk is easier to implement if it takes both the "absolute" minimum m1 and the second minimum m2, i.e., has the type int list -> int -> int -> int.

• Write its specification (a relation between its input and output). Hint: the following definition might be helpful:


let is_min2 ls m1 m2 = 
  m1 < m2 &&
  List.for_all (fun e -> e == m1 || m2 <= e) ls

Exercise 5

Consider the following variant of insertion sort, in which both auxiliary functions are tail-recursive:

let insert_sort_tail ls = 
  let rec walk xs prefix =
    match xs with
    | [] -> prefix
    | h :: t -> 
      let rec insert elem acc remaining run = 
        if not run then acc
        else match remaining with
          | [] -> acc @ [elem]
          | h :: t as l ->
            if h < elem 
            then 
              let run' = true in
              let acc' = acc @ [h] in
              insert elem acc' t run'
            else 
              let run' = false in
              let acc' = acc @ (elem :: l) in
              insert elem acc' t run'
      in
      let acc' = insert h [] prefix true in
      walk t acc'
  in 
  walk ls []

• Define the invariants for the auxiliary functions:

let insert_inv prefix elem acc remaining run = (* ... *)

let insert_sort_tail_walk_inv ls xs acc = (* ... *)

Annotate the implementation above with them and test it.

• Transform insert_sort_tail into an imperative version, which uses (nested) loops instead of recursion.

Week 02: Working with Arrays

Arrays and Operations on Them

• File: ArrayUtil.ml

So far the main data structure we have been looking at and using as a container is an algebraic list. While simple to work with and to grow by adding new elements at the beginning, algebraic lists have a significant shortcoming: they do not allow instant access to their elements. For instance, in a list [6; 8; 5; 2; 3; 7; 0], in order to obtain its fourth element, one needs to "peel off" the preceding elements by means of deconstructing the list, as implemented by the function nth from the standard OCaml library:


let nth l n =
  if n < 0 then invalid_arg "List.nth" 
  else
    let rec walk l n =
      match l with
      | [] -> failwith "nth"
      | a :: l -> if n = 0 then a else walk l (n - 1)
    in 
    walk l n

Arrays are similar but also complementary to lists. They also encode data structured in a sequence, but allow immediate access to their elements, referred to by an index (i.e., a position in the array). At the low level, arrays are implemented by means of fixed offsets, and take full advantage of random-access memory (RAM), implemented by modern computer architectures and allowing one to access a location with a known address almost immediately.

The price to pay for the instant access is the inability to change the size of an array dynamically. In essence, once an array is created, it "reserves" a fixed sequence of memory locations in RAM. Indeed, since more data can be allocated after the array, it is not easy to allow for its future growth. Therefore, the only way to extend (or shrink) an array is to allocate a new array of the necessary size (see the sketch below).

In OCaml, arrays with all known elements can be created using the following syntax:

let a1 = [|6; 8; 5; 2; 3; 7; 0|]

This creates an array with 7 numbers and assigns its reference to a1. It is also possible to create an array of a fixed size, filled with some "default" element. For instance,

let a2 = Array.make 10 0

creates an array of size 10, filled with zeroes. Elements of an array are accessed using their indices:

# a1.(2);;
- : int = 5
# a1.(0);;
- : int = 6
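As a minimal sketch of the point about resizing (the function extend_array below is ours, not part of the course library), growing an array amounts to allocating a fresh one and copying the old contents over:

let extend_array arr new_len default = 
  let len = Array.length arr in
  assert (new_len >= len);
  (* Allocate a new array of the desired size... *)
  let res = Array.make new_len default in
  (* ...and copy the old elements into its prefix *)
  Array.blit arr 0 res 0 len;
  res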

Notice that the indices start from 0 (not from 1), and end with the number equal to the array's length minus one. This is often confusing and might lead to the infamous Off-by-one error. An attempt to address elements outside of this range leads to an exception:


# a1.(7);;
Exception: Invalid_argument "index out of bounds".

One can determine the range of indices via the length of an array as follows:

# Array.length a1;;
- : int = 7
# a1.((Array.length a1) - 1);;
- : int = 0

The elements of an array can be altered using the following syntax. Notice that, upon changing an array's element, no new array is created (hence the update's type is unit), and it is the initial array that is modified. In this sense, arrays are similar to references, which are modified in-place:

# a1;;
- : int array = [|6; 8; 5; 2; 3; 7; 0|]
# a1.(0) <- 12;;
- : unit = ()

The following functions, relying on the familiar notion of sortedness, check that a sub-array (or the whole array) is sorted:

let rec sorted ls = 
  match ls with 
  | [] -> true
  | h :: t -> List.for_all (fun e -> e >= h) t && sorted t

let sub_array_sorted l u arr = 
  let ls = sub_array_to_list l u arr in 
  sorted ls

let array_sorted arr = 
  sub_array_sorted 0 (Array.length arr) arr

The following function checks that an element min is a minimum with respect to a particular sub-array:

let is_min ls min = 
  List.for_all (fun e -> min <= e) ls

With these definitions at hand, the invariant-annotated version of insertion sort on arrays checks its inner- and outer-loop invariants at each iteration:

let insert_sort_inv arr = 
  let len = Array.length arr in
  for i = 0 to len - 1 do
    assert (insert_sort_outer_loop_inv i arr);
    let j = ref i in 
    while !j > 0 && arr.(!j) < arr.(!j - 1) do
      assert (insert_sort_inner_loop_inv j i arr);
      swap arr !j (!j - 1);
      j := !j - 1;
      assert (insert_sort_inner_loop_inv j i arr);
    done;
    assert (insert_sort_outer_loop_inv (i + 1) arr)
  done

Notice that at the end of the inner loop, the conjuncts of insert_sort_inner_loop_inv together imply that the entire prefix arr.(0) ... arr.(i) is sorted, i.e., the new element is correctly positioned within it.

Termination of Insertion Sort

It is not difficult to prove that insertion sort terminates: its outer loop is a bounded iteration (up to len - 1). Its inner loop's termination measure (variant) is j, which strictly decreases, so the loop terminates at the latest when j reaches 0.

Selection Sort

• File: SelectSortArray.ml

Selection sort is another sorting algorithm based on finding a minimum in an array. Unlike insertion sort, which locates each new element within an already sorted prefix, selection sort obtains the sorted prefix by "extending" it, at each iteration, with a minimum of the not-yet-sorted suffix of the array:

let select_sort arr = 
  let len = Array.length arr in
  for i = 0 to len - 1 do
    for j = i + 1 to len - 1 do
      if arr.(j) < arr.(i)
      then swap arr i j
      else ()
    done
  done

Tracing Selection Sort Let us print intermediate stages of the selection sort as follows:


let select_sort_print arr = 
  let len = Array.length arr in
  for i = 0 to len - 1 do
    print_int_sub_array 0 i arr;
    print_int_sub_array i len arr;
    print_newline ();
    for j = i to len - 1 do
      print_offset ();
      Printf.printf "j = %d, a[j] = %d, a[i] = %d: " j arr.(j) arr.(i);
      print_int_sub_array 0 i arr;
      print_int_sub_array i len arr;
      print_newline ();
      if arr.(j) < arr.(i)
      then swap arr i j
      else ()
    done;
    print_int_sub_array 0 (i + 1) arr;
    print_int_sub_array (i + 1) len arr;
    print_newline ();
    print_newline ();
  done

This results in the following output:

# select_sort_print a1;;
[| |] [| 6; 8; 5; 2; 3; 7; 0 |]
  j = 0, a[j] = 6, a[i] = 6: [| |] [| 6; 8; 5; 2; 3; 7; 0 |]
  j = 1, a[j] = 8, a[i] = 6: [| |] [| 6; 8; 5; 2; 3; 7; 0 |]
  j = 2, a[j] = 5, a[i] = 6: [| |] [| 6; 8; 5; 2; 3; 7; 0 |]
  j = 3, a[j] = 2, a[i] = 5: [| |] [| 5; 8; 6; 2; 3; 7; 0 |]
  j = 4, a[j] = 3, a[i] = 2: [| |] [| 2; 8; 6; 5; 3; 7; 0 |]
  j = 5, a[j] = 7, a[i] = 2: [| |] [| 2; 8; 6; 5; 3; 7; 0 |]
  j = 6, a[j] = 0, a[i] = 2: [| |] [| 2; 8; 6; 5; 3; 7; 0 |]
[| 0 |] [| 8; 6; 5; 3; 7; 2 |]

[| 0 |] [| 8; 6; 5; 3; 7; 2 |]
  j = 1, a[j] = 8, a[i] = 8: [| 0 |] [| 8; 6; 5; 3; 7; 2 |]
  j = 2, a[j] = 6, a[i] = 8: [| 0 |] [| 8; 6; 5; 3; 7; 2 |]
  j = 3, a[j] = 5, a[i] = 6: [| 0 |] [| 6; 8; 5; 3; 7; 2 |]
  j = 4, a[j] = 3, a[i] = 5: [| 0 |] [| 5; 8; 6; 3; 7; 2 |]
  j = 5, a[j] = 7, a[i] = 3: [| 0 |] [| 3; 8; 6; 5; 7; 2 |]
  j = 6, a[j] = 2, a[i] = 3: [| 0 |] [| 3; 8; 6; 5; 7; 2 |]
[| 0; 2 |] [| 8; 6; 5; 7; 3 |]

[| 0; 2 |] [| 8; 6; 5; 7; 3 |]
  j = 2, a[j] = 8, a[i] = 8: [| 0; 2 |] [| 8; 6; 5; 7; 3 |]
  j = 3, a[j] = 6, a[i] = 8: [| 0; 2 |] [| 8; 6; 5; 7; 3 |]
  j = 4, a[j] = 5, a[i] = 6: [| 0; 2 |] [| 6; 8; 5; 7; 3 |]
  j = 5, a[j] = 7, a[i] = 5: [| 0; 2 |] [| 5; 8; 6; 7; 3 |]
  j = 6, a[j] = 3, a[i] = 5: [| 0; 2 |] [| 5; 8; 6; 7; 3 |]
[| 0; 2; 3 |] [| 8; 6; 7; 5 |]

[| 0; 2; 3 |] [| 8; 6; 7; 5 |]
  j = 3, a[j] = 8, a[i] = 8: [| 0; 2; 3 |] [| 8; 6; 7; 5 |]
  j = 4, a[j] = 6, a[i] = 8: [| 0; 2; 3 |] [| 8; 6; 7; 5 |]
  j = 5, a[j] = 7, a[i] = 6: [| 0; 2; 3 |] [| 6; 8; 7; 5 |]
  j = 6, a[j] = 5, a[i] = 6: [| 0; 2; 3 |] [| 6; 8; 7; 5 |]
[| 0; 2; 3; 5 |] [| 8; 7; 6 |]

[| 0; 2; 3; 5 |] [| 8; 7; 6 |]
  j = 4, a[j] = 8, a[i] = 8: [| 0; 2; 3; 5 |] [| 8; 7; 6 |]
  j = 5, a[j] = 7, a[i] = 8: [| 0; 2; 3; 5 |] [| 8; 7; 6 |]
  j = 6, a[j] = 6, a[i] = 7: [| 0; 2; 3; 5 |] [| 7; 8; 6 |]
[| 0; 2; 3; 5; 6 |] [| 8; 7 |]

[| 0; 2; 3; 5; 6 |] [| 8; 7 |]
  j = 5, a[j] = 8, a[i] = 8: [| 0; 2; 3; 5; 6 |] [| 8; 7 |]
  j = 6, a[j] = 7, a[i] = 8: [| 0; 2; 3; 5; 6 |] [| 8; 7 |]
[| 0; 2; 3; 5; 6; 7 |] [| 8 |]

[| 0; 2; 3; 5; 6; 7 |] [| 8 |]
  j = 6, a[j] = 8, a[i] = 8: [| 0; 2; 3; 5; 6; 7 |] [| 8 |]
[| 0; 2; 3; 5; 6; 7; 8 |] [| |]

- : unit = ()

Notice that at each iteration of the outer loop, a new minimum of the remaining suffix is identified, and at the end it is this minimum that becomes the "extension" of the currently growing prefix: 0, 2, 3, 5, etc. During the inner iteration, we look for a minimum in the same way we were looking for a minimum in a list. All elements in the not-yet-sorted suffix are larger than or equal to the elements in the prefix. The current element arr.(i) is thus a minimum of the already-scanned portion of the suffix, yet it is larger than any element in the sorted prefix.

Invariants of Selection Sort

The intuition observed above can be captured by the following invariant:

let suffix_larger_than_prefix i arr = 
  let len = Array.length arr in
  let prefix = sub_array_to_list 0 i arr in
  let suffix = sub_array_to_list i len arr in
  List.for_all (fun e -> List.for_all (fun f -> e <= f) suffix) prefix

Week 03: Complexity of Algorithms and Order Notation

Generating Arrays

In order to experiment with sorting at scale, we need a way to produce random inputs. Given generators generate_keys and generate_words for random keys and random strings, random key-value arrays can be produced via two auxiliary functions, list_to_array and list_zip:

let list_to_array ls = 
  match ls with
  | [] -> [||]
  | h :: t ->
    let arr = Array.make (List.length ls) h in
    List.iteri (fun i v -> arr.(i) <- v) ls;
    arr

let list_zip ls1 ls2 = 
  let rec walk xs1 xs2 k = 
    match xs1, xs2 with
    | h1 :: t1, h2 :: t2 ->
      walk t1 t2 (fun acc -> k ((h1, h2) :: acc))
    | _ -> k []
  in
  walk ls1 ls2 (fun x -> x)

We can finally implement a generator for key-value arrays:

let generate_key_value_array len = 
  let kvs = list_zip (generate_keys len len) (generate_words 5 len) in
  list_to_array kvs

It can be used as follows:

# generate_key_value_array 10;;
- : (int * string) array =
[|(1, "emwbq"); (3, "yyrby"); (7, "qpzdd"); (7, "eoplb"); (6, "wrpgn");
  (7, "jbkbq"); (7, "nncgq"); (1, "rruxr"); (8, "ootiw"); (7, "halys")|]

Additionally, we can implement simpler generators for arrays of integers and strings:

let generate_int_array len = 
  generate_keys len len |> list_to_array

let generate_string_array len = 
  generate_words 5 len |> list_to_array

The "pipeline" operator |> is used in OCaml to provide its left operand as an input to its right operand, which must be a function.

Measuring execution time

• File: Util.ml (look for Week 03 functions)

For our future experiments with algorithms and data structures, it is useful to be able to measure the execution time of computations we run. To do so, we implement the following helper function:

let time f x =
  let t = Sys.time () in
  let fx = f x in
  Printf.printf "Execution elapsed time: %f sec\n" (Sys.time () -. t);
  fx

It can be used with any arbitrary computation that takes at least one argument (thanks to currying).

Randomised array generation and testing

Let us make use of the random generators for testing insert_sort and its performance:

open InsertSortArray;;

# let a = generate_key_value_array 5000;;
val a : (int * string) array =
  [|(894, "goavt"); (2768, "hvjjb"); (3535, "pbkoy"); (1615, "ybzua");
    (2820, "ssriq"); (2060, "sfxsu"); (2328, "kjgff"); (112, "xuoht");
    (1188, "xxfcs"); (2384, "xbwgb");
    (1134, "oi"... (* string length 5; truncated *)); (3102, ...); ...|]
# time insert_sort a;;
Execution elapsed time: 0.395832 sec
- : unit = ()

Notice that the comparison operator < in OCaml is overloaded and works not only on integers but on arbitrary values, implementing an ad-hoc comparison. Thanks to this, even though we initially designed insert_sort to work on arrays of integers, it works on arrays of pairs equally well.

Complexity of Algorithms

Having experimented with different implementations of computing the determinant of a matrix, via Laplace expansion or LU-decomposition, we have observed that the performance in the former case is significantly worse than in the latter one, as roughly illustrated by the corresponding execution-time plots.

While the absolute execution time might differ depending on the performance of the computer that executes the program, what is important is how quickly the performance deteriorates, as we increase the size of the input (i.e., the rank of the matrix, in this case). Our goal is, thus, to estimate how slow/fast are our algorithms (i.e., what is their time demand). For this, we will use mathematical functions of the input size to describe time demand of a specific algorithm. Specifically, we want to formulate time demand (aka algorithmic complexity ) functions in a Machine-independent way, focusing on the asymptotic growth, rather than its exact values (different for each CPU).


The machine-independent demand is characterised by the following conventions, adopted in order to make the reasoning uniform:

• Elementary operations take different time on various machines, but this difference does not matter for the relative time demand.

• A machine-independent measure of time is given by counting elementary operations (not their time). Examples of elementary operations include: addition, multiplication, AND, OR, comparisons, assignments.

• In some cases it is common to neglect "cheaper" elementary operations, focusing only on more "expensive" ones (e.g., multiplication beats addition). A small sketch of such counting follows.
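As a minimal sketch of the counting convention (this instrumented version is ours, not from the lecture files), we can count the element comparisons performed by insertion sort on arrays:

let insert_sort_count arr = 
  let count = ref 0 in
  let len = Array.length arr in
  for i = 0 to len - 1 do
    let j = ref i in
    let continue_ = ref true in
    while !continue_ && !j > 0 do
      incr count;  (* one comparison of two elements *)
      if arr.(!j) < arr.(!j - 1)
      then begin
        (* an inlined swap: three assignments *)
        let tmp = arr.(!j) in
        arr.(!j) <- arr.(!j - 1);
        arr.(!j - 1) <- tmp;
        j := !j - 1
      end
      else continue_ := false
    done
  done;
  !count

On a reverse-sorted array of length n this returns n * (n - 1) / 2 comparisons, matching the worst-case analysis later in this chapter.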

Order Notation Let us introduce a concise notation for asymptotic behaviour of time demand functions as an input size \(n\) of an algorithm grows infinitely, i.e., \(n \rightarrow \infty\) .

Big O-notation

Definition. The positive-valued function \(f(x) \in O(g(x))\) if and only if there is a value \(x_0\) and a constant \(c > 0\), such that for all \(x \geq x_0\), \(f(x) \leq c \cdot g(x)\).



The intuition is that \(f(x)\) grows no faster than \(c \cdot g(x)\) as \(x\) gets larger. Notice that the notation \(O(g(x))\) describes a set of functions, “approximated” by \(g(x)\) , modulo constant factors and the starting point.

Properties of Big O-notation

Property 1. \(O(k \cdot f(n)) = O(f(n))\), for any constant \(k\).

Multiplying by \(k\) just means re-adjusting the value of the arbitrary constant factor \(c\) in the definition of big-O. This property ensures machine-independence (i.e., we can forget about constant factors). Since \(\log_{a}n = \log_{a}b \times \log_{b}n\), we do not need to be specific about the base when saying \(O(\log~n)\).

Property 2. \(f(n) + g(n) \in O(\max(f(n), g(n)))\)


Here, \(\max(f(n), g(n))\) is a function that, for any \(n\), returns the maximum of \(f(n)\) and \(g(n)\).

The property follows from the fact that for any \(n\), \(f(n) + g(n) \leq 2 \cdot f(n)\) or \(f(n) + g(n) \leq 2 \cdot g(n)\). Therefore, \(f(n) + g(n) \leq 2 \cdot \max(f(n), g(n))\).

Property 3. \(\max(f(n), g(n)) \in O(f(n) + g(n))\).

This property follows from the fact that for any \(n\), \(\max(f(n), g(n)) = f(n)\) or \(\max(f(n), g(n)) = g(n)\). Therefore, \(\max(f(n), g(n)) \leq f(n) + g(n)\).

Corollary. \(O(f(n) + g(n)) = O(\max(f(n), g(n)))\).

Property 4.


If \(f(n) \in O(g(n))\) , then \(f(n) + g(n) \in O(g(n))\) .

This follows from the fact that there exist \(c, n_0\), such that for any \(n \geq n_0\), \(f(n) \leq c \cdot g(n)\); therefore, for any \(n \geq n_0\), \(f(n) + g(n) \leq (c + 1) \cdot g(n)\). Intuitively, the faster-growing function eventually dominates.

Little o-notation

Definition. The positive-valued function \(f(x) \in o(g(x))\) if and only if for all constants \(\varepsilon > 0\) there exists a value \(x_0\) such that for all \(x \geq x_0\), \(f(x) \leq \varepsilon \cdot g(x)\).

This definition provides a tighter bound on \(f(x)\): it states that \(g(x)\) grows much faster (i.e., more than a constant factor times faster) than \(f(x)\).

Example. We can show that \(x^2 \in o(x^3)\): for any \(\varepsilon > 0\) we can take \(x_0(\varepsilon) = \frac{1}{\varepsilon} + 1\), so for all \(x \geq x_0(\varepsilon)\), \(\varepsilon \cdot x^3 \geq \varepsilon \cdot (\frac{1}{\varepsilon} + 1) \cdot x^2 > x^2\).
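As another quick check in the same style (this example is ours, not from the notes), \(x \in o(x^2)\): for any \(\varepsilon > 0\), take \(x_0(\varepsilon) = \frac{1}{\varepsilon}\); then for all \(x \geq x_0(\varepsilon)\),

\[\varepsilon \cdot x^2 \geq \varepsilon \cdot \frac{1}{\varepsilon} \cdot x = x.\]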

Proofs using O-notation

A standard exercise — showing that \(f(x) \in O(g(x))\) (or that it is not) — is approached as follows:

• Unfold the definition of O-notation;

• Assuming that the statement is true, try to find a fixed pair of values \(c\) and \(x_0\) from the definition to prove that the inequality holds for any \(x \geq x_0\);

• If such a fixed pair cannot be found, as it depends on the value of \(x\), then the universal quantification over \(x\) in the definition does not hold, hence \(f(x) \notin O(g(x))\).

Example 1: Is \(n^2 \in O(n^3)\)? Assume this holds for some \(c\) and \(n_0\), then:


\[\begin{split}\begin{align*} & n^2 - c \cdot n^3 \leq 0,~\text{for all}~n \geq n_0 \\ \iff & n^2 (1 - c \cdot n) \leq 0,~\text{for all}~n \geq n_0 \\ \iff & 1 \leq c \cdot n,~\text{for all}~n \geq n_0 \\ \iff & n \geq \frac{1}{c},~\text{for all}~n \geq n_0 \end{align*}\end{split}\]

As this clearly holds for \(n_0 = 2\) and \(c = 1\), we may conclude that \(n^2 \in O(n^3)\). \(\square\)

Example 2: Is \(n^3 \in O(n^2)\)? Assume this holds for some \(c\) and \(n_0\), then:

\[\begin{split}\begin{align*} & n^3 - c \cdot n^2 \leq 0,~\text{for all}~n \geq n_0 \\ \implies & n^2 \cdot (n - c) \leq 0,~\text{for all}~n \geq n_0 \\ \implies & n - c \leq 0,~\text{for all}~n \geq n_0 \end{align*}\end{split}\]

Now, since \(c\) and \(n_0\) are arbitrary but fixed, we can consider \(n = c + 1 + n_0\) (and we can do so for any \(c\) and \(n_0\)), for which the inequality does not hold; hence no fixed \(c\) and \(n_0\) can be found to satisfy it for all \(n\). Therefore \(n^3 \notin O(n^2)\). \(\square\)

Hierarchy of algorithm complexities
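In increasing order of asymptotic growth, the common complexity classes line up as follows:

\[O(1) \subset O(\log n) \subset O(\sqrt{n}) \subset O(n) \subset O(n \log n) \subset O(n^2) \subset O(n^3) \subset O(2^n) \subset O(n!)\]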

Complexity of sequential composition Consider the following OCaml program, where a is a value of size n :


let x = f1(a) in

x + f2(a)

Assuming the complexity of f1 is \(f(n)\) and the complexity of f2 is \(g(n)\), executing both of them sequentially leads to summing up their complexities, which is over-approximated by \(O(\max(f(n), g(n)))\). This process of "collapsing" big O's can be repeated for a finite number of steps, as long as that number does not depend on the input size. A small sketch follows.
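As a concrete sketch (ours, assuming a non-empty array): a linear pass over an array followed by insertion sort costs \(O(\max(n, n^2)) = O(n^2)\) overall:

let max_elem_then_sort arr = 
  (* O(n): one linear pass to find the maximal element *)
  let mx = Array.fold_left max arr.(0) arr in
  (* O(n^2): insertion sort on arrays from the previous chapter *)
  insert_sort arr;
  mx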

Sums of Series and Complexities of Loops

So far we have seen the complexities of simple straight-line programs, taking the maximum of their asymptotic time demands, using the property of big O with regard to the maximum. Unfortunately, this approach does not work if the number of steps an algorithm makes depends on the size of the input. In such cases, an algorithm typically features a loop, and the demand of a loop intuitively should be obtained as a sum of the demands of its iterations.

Consider, for instance, the following OCaml program that sums up the elements of an array:

let sum = ref 0 in
for i = 0 to n - 1 do
  sum := !sum + arr.(i)
done;
!sum

Each individual summation has complexity \(O(1)\). Why can't we obtain the overall complexity to be \(O(1)\) by just summing them up using the rule of maximums (\(\max(O(1), \ldots, O(1)) = O(1)\))? The problem is similar to summing up a series of numbers in math:

\[\underbrace{1 + 1 + \ldots + 1}_{k~\text{times}} = \sum_{i=1}^{k}1 = k\]

but also

\[\lim_{n \rightarrow \infty} \sum_{i=1}^{n}1 = \infty\]

What we in fact need to estimate is how fast this sum grows as a function of its upper limit \(n\), which corresponds to the number of iterations:

\[\sum_{i=1}^{n}1 = \underbrace{1 + 1 + \ldots + 1}_{n~\text{times}} = n \in O(n)\]

By distributivity of the sums:

\[\sum_{i=1}^{n} k = \underbrace{k + k + \ldots + k}_{n~\text{times}} = n \times k \in O(n)\]

In general, such sums are referred to as series in mathematics and have the following standard notation:


\[\sum_{i = a}^{b} f(i) = f(a) + f(a + 1) + \ldots + f(b)\]

where \(a\) is called the lower limit, and \(b\) is the upper limit. The whole sum is \(0\) if \(b < a\).

Arithmetic series

Arithmetic series are the series of the form \(\sum_{i=a}^{b}i\). Following the example of Gauss, one can notice that

\[\begin{split}\begin{align*} 2 \times \sum_{i=1}^{n} i &= (1 + 2 + \ldots + (n - 1) + n) \\ &+ (n + (n - 1) + \ldots + 2 + 1) \\ &= n \cdot (n + 1) \end{align*}\end{split}\]

This gives us the formula for arithmetic series:

\[\sum_{i=1}^{n}i = \frac{n \cdot (n + 1)}{2} \in O(n^2)\]

Somewhat surprisingly, an arithmetic series starting at a constant lower bound other than 1 has the same complexity:

\[\sum_{i=j}^{n}i = \sum_{i=1}^{n}i - \sum_{i=1}^{j - 1}i = \frac{n \cdot (n + 1)}{2} - \frac{j \cdot (j - 1)}{2} \in O(n^2)\]

Geometric series Geometric series are defined as series of exponents: \[S(n) = a + a^2 + a^3 + \ldots + a^n = \sum_{i=1}^{n}a^i\] Let us notice that \[\begin{split}\begin{align*} a \cdot S(n) &= a^2 + a^3 + \ldots + a^{n + 1} \\ &= S(n) - a + a^{n+1} \end{align*}\end{split}\] Therefore, for \(a \neq 1\) \[S(n) = \frac{a (1 - a^n)}{1 - a}\]
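The asymptotic behaviour depends on the base (a fact worth spelling out explicitly): for \(a > 1\), \(S(n) = \frac{a(a^n - 1)}{a - 1} \in O(a^n)\), while for \(0 < a < 1\) the sum is bounded by the constant \(\frac{a}{1 - a}\), so \(S(n) \in O(1)\).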

Estimating a sum by an integral Sometimes it is difficult to write an explicit expression for a sum. The following trick helps to estimate sums of values of monotonically growing functions: \[\sum_{i=1}^{n}f(i) \leq \int_{1}^{n+1} f(x) dx\]


Example : What is the complexity class of \(\sum_{i=1}^{n}i^3\) ? We can obtain it as follows: \[\sum_{i=1}^{n}i^3 \leq \int_{1}^{n+1} x^3 dx = \left[\frac{x^4}{4}\right]_{1}^{n+1} = \frac{(n + 1)^4 - 1}{4} \in O(n^4)\]

Big O and function composition

Let us take \(f_1(n) \in O(g_1(n))\) and \(f_2(n) \in O(g_2(n))\). Assuming \(g_2(n)\) grows monotonically, what would be a tight enough complexity class for \(f_2(f_1(n))\)? It is tempting to say that it should be \(g_2(g_1(n))\). However, recall that by the definition of big O, \(f_1(n) \leq c_1\cdot g_1(n)\) and \(f_2(n) \leq c_2\cdot g_2(n)\) for \(n \geq n_0\) and some constants \(c_1, c_2\) and \(n_0\). By monotonicity of \(g_2\) we get

\[f_2(f_1(n)) \leq c_2 \cdot g_2(f_1(n)) \leq c_2 \cdot g_2(c_1 \cdot g_1(n)).\]

Therefore

\[f_2(f_1(n)) \in O(g_2(c_1 \cdot g_1(n)))\]


The implication of this is that one should treat function composition with some care. Specifically, it is okay to drop \(c_1\) if \(g_2\) is a polynomial, a logarithm, or their composition, since:

\[\begin{split}\begin{align*} (c\cdot f(n))^k &= c^k \cdot f(n)^k \in O(f(n)^k) \\ \log(c\cdot f(n)) &= \log c + \log(f(n)) \in O(\log(f(n))) \end{align*}\end{split}\]

However, this does not work for faster-growing functions \(g_2(n)\), such as the exponent and the factorial:

\[\begin{split}\begin{align*} k^{c\cdot f(n)} &= (k^c)^{f(n)} \notin O(k^{f(n)}) \\ (c \cdot f(n))! &= (c\cdot f(n)) \cdot (c\cdot f(n) - 1) \cdot \ldots \cdot (f(n) + 1) \cdot (f(n))! \notin O((f(n))!) \end{align*}\end{split}\]

Complexity of algorithms with loops

Let us get back to our program that sums up the elements of an array:

let sum = ref 0 in
for i = 0 to n - 1 do
  sum := !sum + arr.(i)
done;
!sum

The initial assignment is an atomic command, and so is the final dereference, hence they both take \(O(1)\). The bounded for-iteration executes \(n\) times, each time with a constant demand of its body, hence its complexity is \(O(n)\). To summarise, the overall complexity of the procedure is \(O(n)\).

Let us now take a look at one of the sorting algorithms that we have studied, namely Insertion Sort:

let insert_sort arr = 
  let len = Array.length arr in
  for i = 0 to len - 1 do
    let j = ref i in 
    while !j > 0 && arr.(!j) < arr.(!j - 1) do
      swap arr !j (!j - 1);
      j := !j - 1
    done
  done

Assuming that the size of the array is \(n\), the outer loop makes \(n\) iterations. The inner loop, however, goes in the opposite direction and starts from \(j\) such that \(0 \leq j < n\) and, in the worst case, terminates with \(j = 0\). The complexity of the body of the inner loop is constant (as swap performs three atomic operations, and the assignment is atomic). Therefore, we can estimate the complexity of this sorting by the following sum (assuming \(c\) is a constant accounting for the complexity of the inner loop body):


\[\sum_{i=0}^{n-1}\sum_{j=0}^{i}c = c \sum_{i=0}^{n - 1}i = c\frac{n (n - 1)}{2} \in O(n^2).\] With this, we conclude that the complexity of the insertion sort is quadratic in the size of its input, i.e., the length of the array.

Complexity of Simple Recursive Algorithms In this chapter we will study complexity of the programs that combine both loops and recursion.

Complexity of computing the factorial

Typically, any terminating recursive algorithm works by calling itself on progressively smaller instances of some data structure. For instance, consider the "Hello, World!" of all recursive programs — the factorial function:

let rec factorial n = 
  if n <= 1 then 1
  else n * factorial (n - 1)

First-order recurrence relations

A first-order recurrence relation expresses \(f(n)\) in terms of \(f(n - 1)\) for \(n > a\) for some \(a\).

Example. For some \(c > 0\):

\[\begin{split}\begin{align*} f(0) &= 1 \\ f(n) &= c \cdot f (n - 1) \end{align*}\end{split}\]

By inspection and unfolding the definition of \(f(n)\), we get the solution \(f(n) = c^n\).

Definition. Homogeneous recurrence relations take the following form for some constants \(a\) and \(d\), and a coefficient \(b_n\), which might be a function of \(n\):


\[\begin{split}\begin{align*} f(n) &= b_n \cdot f(n - 1) ~\text{if}~ n > a \\ f(a) &= d \end{align*} \end{split}\]

By unfolding the definition recursively, we can obtain the following formula to solve it: \[\begin{split}\begin{align*} f(n) &= b_n \cdot f(n - 1) \\ &= b_n \cdot b_{n-1} \cdot f(n - 2) \\ & \ldots \\ &= b_n \cdot b_{n - 1} \cdot \ldots \cdot b_{a + 1} \cdot f(a) \\ &= b_n \cdot b_{n - 1} \cdot \ldots \cdot b_{a + 1} \cdot d \end{align*}\end{split}\] Therefore: \[f(n) = \left( \prod_{i = a + 1}^{n}b_i \right) \cdot f(a)\] You can try to remember that formula, but it’s easier to remember how it is obtained.

Inhomogeneous recurrence relations

Definition. Inhomogeneous recurrence relations take the following form for some constants \(a\) and \(d\), and coefficients \(b_n\) and \(c_n\), which might be functions of \(n\):

\[\begin{split}\begin{align*} f(n) &= b_n \cdot f(n - 1) + c_n ~\text{if}~ n > a \\ f(a) &= d \end{align*}\end{split}\]

The trick to solving an inhomogeneous relation is to "pretend" that we are solving a homogeneous recurrence relation, by changing the function \(f(n)\) to \(g(n)\), such that

\[\begin{split}\begin{align*} f(n) &= b_{a+1}\cdot \ldots \cdot b_n \cdot g(n) ~\text{if}~ n > a \\ f(a) &= g(a) = d \end{align*}\end{split}\]

Intuitively, this "change of function" allows us to reduce a general recurrence relation to one where \(b_n = 1\). In other words, \(g(n)\) is a "calibrated" version of \(f(n)\) that behaves "like" \(f(n)\) modulo the appended product of coefficients. Let us see how this trick helps us to solve the initial relation. We start by expanding the definition of \(f(n)\) for \(n > a\) as follows:

\[f(n) = b_n \cdot f(n - 1) + c_n\]

We then recall that \(f(n)\) can be expressed via \(g(n)\), and rewrite both parts of this equation as follows:

\[\underbrace{b_{a+1}\cdot \ldots \cdot b_n}_{X} \cdot g(n) = \underbrace{b_n \cdot b_{a+1}\cdot \ldots \cdot b_{n-1}}_{X} \cdot g(n - 1) + c_n\]

Notice that the parts marked via \(X\) are, in fact, the same, so we can divide both sides of the equation by it, obtaining


\[g(n) = g(n - 1) + d_n ~\text{where}~ d_n = \frac{c_n}{\prod_{i = a + 1}^{n}b_i}.\]

We can now solve the recurrence on \(g(n)\) via the method of differences, obtaining

\[g(n) = g(a) + \sum_{j = a + 1}^{n}d_j ~\text{where}~ d_j = \frac{c_j}{\prod_{k = a + 1}^{j}b_k}\]

The final step is to obtain \(f(n)\) by multiplying \(g(n)\) by the corresponding product. This way we obtain:

\[f(n) = \prod_{i = a + 1}^{n} b_i \cdot \left(g(a) + \sum_{j = a + 1}^{n}d_j\right) ~\text{where}~ d_j = \frac{c_j}{\prod_{k = a + 1}^{j}b_k}\]

As in the previous case, it is much easier to remember the "trick" of introducing \(g(n)\) and reproduce it every time you solve a relation, than to remember the formula above! In the examples we will see, the initial index \(a\) will normally be 0 or 1. The techniques for series summation and approximation will come in useful when dealing with the coefficients \(d_j\).

Example. Consider the following recurrence relation:

\[\begin{split}\begin{align*} f(n) &= 3 \cdot f(n - 1) + 1 ~\text{if}~ n > 0 \\ f(0) &= 0 \end{align*}\end{split}\]

We start by changing the function so that \(f(n) = 3^n \cdot g(n)\) for an unknown \(g(n)\), since \(b_i = 3\) for any \(i\). Substituting for \(f(n)\) gives us

\[g(n) = g(n - 1) + \frac{1}{3^n}\]

By the method of differences, we obtain

\[g(n) = \sum_{i = 1}^{n}\frac{1}{3^i} = \left[\sum_{i = 1}^{n}a^i\right]_{a = \frac{1}{3}} = \left[\frac{a (1 - a^n)}{1-a}\right]_{a = \frac{1}{3}} = \frac{1}{2}\left(1 - \frac{1}{3^n}\right)\]

Finally, restoring \(f(n)\), we get

\[f(n) = 3^n \cdot g(n) = \frac{3^n}{2}\left(1 - \frac{1}{3^n}\right) = \frac{1}{2} \left(3^n - 1\right) \in O(3^n)\]
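A quick sanity check of this closed form (the snippet is ours, not from the lecture files):

let rec f n = if n = 0 then 0 else 3 * f (n - 1) + 1

let pow3 n = 
  let r = ref 1 in
  for _ = 1 to n do r := !r * 3 done;
  !r

(* f n should coincide with (3^n - 1) / 2 for small n *)
let () = 
  assert (List.for_all (fun n -> f n = (pow3 n - 1) / 2)
            [0; 1; 2; 3; 4; 5; 6; 7; 8])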

Exercises

Exercise 1: Realistic Complexity of Laplace Expansion Recall the definition of a matrix determinant by Laplace expansion \[|M| = \sum_{i = 0}^{n - 1}(-1)^{i} M_{0, i} \cdot |M^{0, i}|\]


where \(M^{0, i}\) is the corresponding minor of the matrix \(M\) of size \(n\), with indexing starting from \(0\). This definition can be translated to OCaml as follows:

let rec detLaplace m n = 
  if n = 1 then m.(0).(0)
  else
    let det = ref 0 in
    for i = 0 to n - 1 do
      let min = minor m 0 i in
      let detMin = detLaplace min (n - 1) in
      det := !det + (power (-1) i) * m.(0).(i) * detMin
    done;
    !det

A matrix is encoded as a 2-dimensional array m, whose rank (both dimensions) is n. Here, minor returns the minor of the matrix m, and power a b returns an integer value a raised to the natural power b.

Out of the explanations and the code above, estimate (in terms of big-O notation) the time complexity \(t(n)\) of the recursive determinant computation. Start by writing down a recurrence relation on \(t(n)\). Assume that the complexity of minor is \(c \cdot n^2\) for some constant \(c\). Consider the complexity of returning an element of an array to be 0 (i.e., \(t(1) = 0\)). For \(n > 1\), consider power, addition, multiplication, and other primitive operations to take constant time, approximating all of those constants by a single constant \(c\).

Exercise 2

Implement a function that takes (a) a sorting procedure sort for a key-value array, (b) a number n, and (c) a number length, and generates n random arrays of length length, testing that sort is indeed correct on all those arrays.

Exercise 3

Implement a procedure that takes an unsorted array and a given range of keys (represented by a pair of numbers lo < hi, with the right boundary not included), and returns the list of all elements in the array whose keys are in that range. Estimate the complexity of this procedure.

Week 04: Divide-and-Conquer Algorithms


Searching in Arrays • File: SearchArray.ml Let us put key-value arrays to some good use.

Linear Search

One of the most common operations with key-value arrays is searching, i.e., looking for the index of an element with some known key, or discovering that there is no such element in the array. The simplest implementation of searching walks through the entire array until the sought element is found, or the whole array is traversed:

let linear_search arr k = 
  let len = Array.length arr in
  let res = ref None in
  let i = ref 0 in 
  while !i < len && !res = None do
    (if fst arr.(!i) = k 
     then res := Some (!i, arr.(!i)));
    i := !i + 1
  done;
  !res

We can now test it on a random array:

let a1 = [|(9, "lgora"); (0, "hvrxd"); (2, "zeuvd"); (2, "powdp"); (8, "sitgt");
           (4, "khfnv"); (2, "omjkn"); (0, "txwyw"); (0, "wqwpu"); (0, "hwhju")|];;

# linear_search a1 4;;
- : (int * (int * string)) option = Some (5, (4, "khfnv"))
# linear_search a1 10;;
- : (int * (int * string)) option = None

In the first case, linear_search has returned the index (5) of an element with the key 4, as well as the element itself. In the second case, it returns None, as there is no key 10 in the array a1. In the worst case — the key is absent or located at the very end — the procedure traverses the entire array, as the sketch below illustrates.
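A small timing sketch (ours; it assumes that the generated keys are non-negative, so the key -1 is never present and forces a full scan):

# let a = generate_key_value_array 100000;;
# time (fun arr -> linear_search arr (-1)) a;;

Doubling the array size should roughly double the reported time, in line with the linear, i.e., \(O(n)\), worst-case complexity of this search.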

Binary Search Binary search is an efficient search procedure that works on a sorted array and looks for an element in it, repeatedly dividing its search-space by half:


let rec binary_search arr k = 
  let rec rank lo hi = 
    if hi <= lo then None
    else
      let mid = (lo + hi) / 2 in
      if fst arr.(mid) = k 
      then Some (arr.(mid))
      else if fst arr.(mid) < k
      then rank (mid + 1) hi
      else rank lo mid
  in
  let len = Array.length arr in 
  rank 0 len

We can also annotate our implementation with its invariant and test it, printing the explored sub-array at each recursive call:

let binary_search_inv arr k = 
  let rec rank lo hi = 
    Printf.printf "lo = %d, hi = %d\n" lo hi;
    Printf.printf "Subarray: [";
    let ls = array_to_list lo hi arr in
    List.iter (fun (k, v) -> Printf.printf "(%d, %s); " k v) ls;
    Printf.printf "]\n";
    if hi <= lo then None
    else
      let mid = (lo + hi) / 2 in
      if fst arr.(mid) = k 
      then Some (arr.(mid))
      else if fst arr.(mid) < k
      then rank (mid + 1) hi
      else rank lo mid
  in
  let len = Array.length arr in 
  rank 0 len

Merging two sorted arrays

The workhorse of Merge Sort is the procedure that merges two sorted arrays from1 and from2 into a destination (sub-)array dest, repeatedly taking the smaller of the two front elements:

let merge from1 from2 dest lo hi = 
  let len1 = Array.length from1 in
  let len2 = Array.length from2 in
  let i = ref 0 in
  let j = ref 0 in
  for k = lo to hi - 1 do
    if !i >= len1
    (* from1 is exhausted, copy everything from from2 *)
    then (dest.(k) <- from2.(!j); j := !j + 1)
    else if !j >= len2
    (* from2 is exhausted, copy everything from from1 *)
    then (dest.(k) <- from1.(!i); i := !i + 1)
    else if fst from1.(!i) <= fst from2.(!j)
    then (dest.(k) <- from1.(!i); i := !i + 1)
    else (dest.(k) <- from2.(!j); j := !j + 1)
  done

Week 05: Binary Heaps and Priority Queues

Printing and Validating Generic Arrays

To check the results of generic, comparator-parameterised sorting, we package the familiar sortedness checks into a functor, parameterised by a comparator module C:

module SortChecker (C : Comparable) = struct
  let rec sorted ls = 
    match ls with 
    | [] -> true
    | h :: t -> List.for_all (fun e -> C.comp e h >= 0) t && sorted t

  let sub_array_sorted l u arr = 
    let ls = subarray_to_list l u arr in 
    sorted ls

  let array_sorted arr = 
    sub_array_sorted 0 (Array.length arr) arr

  let sorted_spec arr1 arr2 = 
    array_sorted arr2 && same_elems (to_list arr1) (to_list arr2)
end


Finally, in the remainder of this chapter it will be so common for us to require both the ability to print and to compare values of a certain data type, that we merge these two behavioural interfaces into a single module signature, which will later be used to describe a parameter module for various functors that take advantage of both sorting and printing:

module type CompareAndPrint = sig
  type t
  val comp : t -> t -> int
  (* For pretty-printing *)
  val pp : t -> string
end

Best-Worst Case for Comparison-Based Sorting

A strength and a weakness of comparison-based sorting is the fact that the only operation its implementation relies upon is a primitive to compare two elements. This makes it very generic and applicable to diverse classes of data, but also puts a theoretical limit on how fast (in the asymptotic big-O sense) we can sort arrays if we use only comparison. Let us see what this theoretical limit is.

Quicksort, Insertion sort, and Merge sort are all comparison-based sorting algorithms: they compare elements pairwise. An "ideal" algorithm will always perform no more than \(t(n)\) comparisons for the worst possible case on an array of size \(n\). What, then, is \(t(n)\)?

Let us think in the following way. What a sorting algorithm delivers is a permutation of the initial array, which is also sorted. The number of possible permutations of \(n\) elements is \(n!\), and such an algorithm should find "the right one" by following a path in a binary decision tree, where each node corresponds to comparing just two elements. As an example, consider the decision tree for an array of just three elements [|A; B; C|].


Intuitively, by making \(t(n)\) steps in a decision tree, in its search for the "correctly ordered" permutation, the algorithm should be able to say which ordering it is. Since the number of reachable leaves in \(t(n)\) steps is \(2^{t(n)}\), and the number of possible orderings is \(n!\), it should be the case that

\[2^{t(n)} \geq n!\]

To obtain the bound on \(t(n)\), we need to solve this inequality. We can do so by first taking the logarithm of both sides:

\[t(n) \geq \log_2(n!)\]

We can then use Stirling's formula for large \(n\), which states that \(n! \approx \sqrt{2\pi n} \left(\frac{n}{e}\right)^n\). Therefore, we obtain

\[t(n) \approx n \log_e n = (\log_e 2)\, n \log_2 n \in O(n \log n)\]

With this we establish that even the best possible algorithm for sorting arrays using only comparisons is doomed to perform \(O(n \log n)\) comparisons in the worst case. The complexity class \(O(n \log n)\) is so paramount in the study of algorithms that it has deserved its own name: computations having this complexity are often referred to as having linearithmic complexity.

Sorting in Linear Time

• File: LinearTimeSorting.ml

As we have just determined, one cannot do comparison-based sorting better than in \(O(n \log n)\) in the worst case. However, we can improve this complexity if we base the logic of our algorithm not just on comparisons, but also exploit the intrinsic properties of the data used as keys for the elements to be sorted (e.g., integers). In this chapter we will see some examples of such specialised sorting procedures.


Simple Bucket Sort

Bucket sort works well for the case when the size of the set from which we draw the keys is limited by a certain number bnum. In this case, we can allocate an auxiliary array of "buckets" (implemented as lists), which serve to collect the elements with the key corresponding to the bucket number. The code is as follows:

let simple_bucket_sort bnum arr = 
  let buckets = Array.make bnum [] in
  let len = Array.length arr in 
  for i = 0 to len - 1 do
    let key = fst arr.(i) in
    let bindex = key mod bnum in
    let b = buckets.(bindex) in
    buckets.(bindex) <- arr.(i) :: b
  done;
  (* Concatenate the contents of the buckets, in the order of their keys *)
  let res = ref [] in
  for bi = bnum - 1 downto 0 do
    res := List.append (List.rev buckets.(bi)) !res
  done;
  list_to_array !res

Binary Heaps

• File: Heaps.ml

Binary heaps are represented by arrays, with the parent-child tree structure encoded via indices [1]: for an element at a 0-based index \(i\), its left child resides at the index \(2 \cdot (i + 1) - 1\) and its right child at \(2 \cdot (i + 1)\); conversely, for \(i > 0\), its parent can be obtained by taking an index \((i + 1) / 2 - 1\). A binary heap is an array viewed as such a tree, in which the key of each parent is not smaller than the keys of its children [2].

Let us now define a module that encapsulates all operations with binary heaps (represented via arrays), of which we so far know three: finding a parent, a left, and a right child of a node:

module Heaps (C : CompareAndPrint) = struct
  include C
  include ArrayPrinter(C)

  (* 1. Main heap operations *)
  let parent arr i = 
    if i = 0 
    then (0, arr.(i))
    else 
      let j = (i + 1) / 2 - 1 in
      (j, arr.(j))

  let left arr i = 
    let len = Array.length arr in 
    let j = 2 * (i + 1) - 1 in
    if j < len 
    then Some (j, arr.(j))
    else None

  let right arr i = 
    let len = Array.length arr in 
    let j = 2 * (i + 1) in
    if j < len 
    then Some (j, arr.(j))
    else None

  (* More definitions to come here... *)
end

Notice that for a given index there might be no child, hence both left and right return an option type. We can instantiate the functor above to work with our familiar arrays of key-value pairs by supplying the following instance of the CompareAndPrint parameter:

module KV = struct
  type t = int * string
  let comp = key_order_asc
  (* For pretty-printing *)
  let pp (k, v) = Printf.sprintf "(%d, %s)" k v
end


module KVHeaps = Heaps(KV)

Let us now create our first binary heap and make sure that it follows the intuition described above:

let good_heap = 
  [|(16, "a"); (14, "b"); (10, "c"); (8, "d"); (7, "e"); 
    (9, "f"); (3, "g"); (2, "h"); (4, "i"); (1, "j")|]

We can do so by querying its contents:

# open KVHeaps;;
# right good_heap 0;;
- : (int * (int * string)) option = Some (2, (10, "c"))
# left good_heap 1;;
- : (int * (int * string)) option = Some (3, (8, "d"))
# right good_heap 1;;
- : (int * (int * string)) option = Some (4, (7, "e"))
# left good_heap 2;;
- : (int * (int * string)) option = Some (5, (9, "f"))
# right good_heap 2;;
- : (int * (int * string)) option = Some (6, (3, "g"))
# parent good_heap 9;;
- : int * (int * string) = (4, (7, "e"))
# parent good_heap 4;;
- : int * (int * string) = (1, (14, "b"))
# parent good_heap 1;;
- : int * (int * string) = (0, (16, "a"))

Notice that, while not sorted (in an ascending or a descending order), the heap (as per its definition) always has the element with the greatest key at position 0 of the array.

Definition. A heap defined as per the definition above (a parent is larger than its children) is called a max-heap. A heap defined via the dual property (a parent is smaller than its children) is called a min-heap.
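A minimal sketch (the module names are ours): since the Heaps functor is parameterised by a comparator, a min-heap can be obtained from the very same code by inverting the comparison:

module KVMin = struct
  include KV
  (* Flip the order of the arguments of the comparator *)
  let comp x y = KV.comp y x
end

module KVMinHeaps = Heaps(KVMin)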


Checking that an array is a heap

Next, we shall write a function that, given an array, will determine whether it has the heap structure or not. The following definition should be placed within the body of the Heaps functor:

(* 2. Testing whether something is a heap *)
let is_heap arr = 
  let len = Array.length arr - 1 in
  let res = ref true in
  let i = ref 0 in
  while !i <= len / 2 - 1 && !res do
    let this = arr.(!i) in
    let l = left arr !i in
    let r = right arr !i in
    let is_left = l = None || comp this (snd (get_exn l)) >= 0 in
    let is_right = r = None || comp this (snd (get_exn r)) >= 0 in
    res := !res && is_left && is_right;
    i := !i + 1
  done;
  !res

The main machinery of is_heap applies the definition given above, in a while-loop, for each element of the array arr, relying on the comparator comp. Notice that the loop condition !i <= len / 2 - 1 restricts the traversal to the elements that have at least one child, i.e., the parents. The following version of the checker also provides debug output, displaying the first encountered triple that violates the heap property:

let is_heap_print ?(print = false) arr = 
  let open Printf in
  let len = Array.length arr - 1 in
  let res = ref true in
  let i = ref 0 in
  while !i <= len / 2 - 1 && !res do
    let this = arr.(!i) in
    let l = left arr !i in
    let r = right arr !i in
    let is_left = l = None || comp this (snd (get_exn l)) >= 0 in
    let is_right = r = None || comp this (snd (get_exn r)) >= 0 in
    res := !res && is_left && is_right;
    (if (not !res && print) then (
       let (li, ll) = get_exn l in
       let (ri, rr) = get_exn r in
       printf "Out-of-order elements:\n";
       printf "Parent: (%d, %s)\n" !i (pp this);
       printf "Left: (%d, %s)\n" li (pp ll);
       printf "Right: (%d, %s)\n" ri (pp rr)
     ));
    i := !i + 1
  done;
  !res

This checker features an optional named boolean parameter print (which by default is taken to be false) that can be omitted. This parameter determines whether the debug output is switched on. If it is, and at a certain point the heap property breaks, an offending triple of a parent and its two children will be printed out (notice that a named parameter is passed with a tilde, i.e., ~print):

# KVHeaps.is_heap_print ~print:true bad_heap;;
Out-of-order elements:
Parent: (2, (10, c))
Left: (5, (11, f))
Right: (6, (3, g))


- : bool = false

[1] You can remember the way children are defined for 0-based arrays using the following intuition: shift the current index by +1 to obtain the index as in a 1-based array, compute the child index, and then subtract 1 to return back to 0-based indexing.

[2] The term "heap" has been originally used to denote "almost-complete binary tree", but is now also used to refer to "garbage-collected runtime memory", such as provided by Java and C#. There is no relation between these two notions, and here and further by heaps we will mean binary trees.
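For instance, for the 0-based index \(i = 4\): shifting gives the 1-based index 5, whose children are 10 and 11 and whose parent is \(5 / 2 = 2\); subtracting 1 returns the 0-based answers — children at \(2 \cdot (4 + 1) - 1 = 9\) and \(2 \cdot (4 + 1) = 10\), and the parent at \((4 + 1) / 2 - 1 = 1\).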

Maintaining Binary Heaps

• File: Heaps.ml (continued)

Let us now fix the broken heap bad_heap by restoring the order in it. As we can see, the issue there is between the parent (10, "c") and its left child (11, "f"), which are out of order.

"Heapifying" elements of an array

What we need to do is to swap the offending parent with one of its children (assuming that both subtrees reachable from the children obey the descending order), and also make sure that the swapped element (10, "c") "sinks down", finding its correct position in the reachable subtree. This procedure of "sinking" is what is implemented by the most important heap-manipulating function, shown below:

(* 3. Restoring the heap property for an element i *)
let rec max_heapify heap_size arr i = 
  let len = Array.length arr in
  assert (heap_size <= len);
  if i > (len - 1) / 2 || i >= heap_size then ()
  else
    let ai = arr.(i) in
    let largest = ref (i, arr.(i)) in
    let l = left arr i in
    (* Shall we swap with the left child?.. *)
    if l <> None && (fst (get_exn l)) < heap_size && 
       comp (snd (get_exn l)) (snd !largest) > 0
    then largest := get_exn l;
    (* May be the right child is even bigger? *)
    let r = right arr i in
    if r <> None && (fst (get_exn r)) < heap_size &&
       comp (snd (get_exn r)) (snd !largest) > 0
    then largest := get_exn r;
    if !largest <> (i, ai)
    (* Okay, there is a necessity to progress further... *)
    then begin
      swap arr i (fst !largest);
      max_heapify heap_size arr (fst !largest)
    end

The implementation of max_heapify deserves some attention, as it is not entirely trivial. It takes three arguments: an integer heap_size (whose role will be explained shortly), an array arr representing the heap, and an index i of the parent element of an offending triple. The heap_size serves the purpose of "limiting the scope" of the heap in the array and is always assumed to be less than or equal to the array size. The reason why one might need it is that in some applications (as we will soon see) it is convenient to consider only a certain prefix of an array as a heap (thus obeying the heap definition), while the remaining suffix does not need to be a part of it. One can, therefore, think of heap_size as a "separator" between the heap and non-heap parts of the array.

The body of max_heapify is rather straightforward. It first assumes that the element at position arr.(i) is the largest one. It then tries to retrieve both of its children (if those are within the array size and heap_size ranges) and determines the largest of the three. If the largest is not the parent, it becomes the new parent, swapping with the previous one. However, such a swap might have broken the heap property in one of the subtrees, so the procedure needs to be repeated. Hence, the operation happens recursively for the new child (which used to be the parent and now, after the swap, resides at position fst !largest).

Question: Why does max_heapify terminate?

Let us now restore the heap using the max_heapify procedure:

let bad_heap =
  [|(16, "a"); (14, "b"); (9, "c"); (8, "d"); (7, "e"); (11, "f");
    (3, "g"); (2, "h"); (4, "i"); (1, "j"); (1, "k"); (10, "l"); (6, "m")|];;

# open KVHeaps;;
# is_heap bad_heap;;
- : bool = false
# is_heap_print ~print:true bad_heap;;
Out-of-order elements:
Parent: (2, (9, c))
Left: (5, (11, f))
Right: (6, (3, g))
- : bool = false
# max_heapify 13 bad_heap 2;;
- : unit = ()
# is_heap_print ~print:true bad_heap;;
- : bool = true
# bad_heap;;
- : (int * string) array =
[|(16, "a"); (14, "b"); (11, "f"); (8, "d"); (7, "e"); (10, "l");
  (3, "g"); (2, "h"); (4, "i"); (1, "j"); (1, "k"); (9, "c"); (6, "m")|]

As we can observe, the offending elements have now been correctly rearranged.

Complexity of heapify

The maximal number of steps required to reach a leaf from the root of a tree is called the height of the tree. Notice that the way max_heapify "walks" an array is by repeatedly moving to the left/right child of an element. This way, it will make at most \(\log_2 n\) steps (which is the height of a heap with \(n\) elements, viewed as an almost-complete binary tree). That is, the max_heapify procedure terminates very quickly, making \(O(\log n)\) swaps.

Building a heap from an array

We can now use max_heapify iteratively to turn an arbitrary array into a max-heap. The following code should be added to the Heap functor:

let build_max_heap arr =
  let len = Array.length arr in
  for i = (len - 1) / 2 downto 0 do
    max_heapify len arr i
  done

Question: Why does the for-loop start only from i = (len - 1) / 2, and not from len - 1?

The complexity of build_max_heap can be over-approximated by analysing the cost of each iteration of the for-loop and the number of iterations it makes (see the remark at the end of this subsection). Why does this procedure deliver a heap? This can be established by the following invariant, which we state in plain English (implementing it is a home exercise):

Invariant: At the start of each iteration of the for-loop in build_max_heap, each node i + 1, i + 2, ..., len - 1 is a root of a max-heap.

Question: Why does this invariant hold for the elements from the second half of the array?

Question: What happens if we start building the heap from the beginning of the array, moving right? How will correctness and performance be affected? Justify your answer by talking about loop invariants.

We can test our procedure on some random arrays:

# let a = generate_key_value_array 10;;
val a : (int * string) array =
  [|(6, "ktesl"); (9, "herli"); (7, "etqiz"); (4, "wrnqu"); (3, "ceojd");
    (2, "cklpw"); (2, "mvcme"); (7, "uowmp"); (5, "yeuzq"); (4, "yuzdw")|]
# KVHeaps.build_max_heap a;;
- : unit = ()
# a;;
- : (int * string) array =
  [|(9, "herli"); (7, "uowmp"); (7, "etqiz"); (6, "ktesl"); (4, "yuzdw");
    (2, "cklpw"); (2, "mvcme"); (4, "wrnqu"); (5, "yeuzq"); (3, "ceojd")|]
# is_heap a;;
- : bool = true

Heapsort

• File: Heaps.ml (continued)

Let us now exploit the ability of a max-heap to always keep the element with the largest key at the beginning, as well as its ability to be restored from an "almost-heap" (i.e., one that has only one offending triple) in \(O(\log n)\) swaps, to construct a very efficient sorting algorithm — Heapsort.

Heapsort starts by turning an arbitrary array into a max-heap (by means of build_max_heap). It then repeatedly takes the first element and swaps it with the "working" element in the tail, building a sorted array backwards. After each swap, it shifts the "front" of what is considered to be the heap (i.e., the above-mentioned heap_size), separating it from the already sorted array suffix, and restores the heap structure up to this front. The following code is the final addition to the Heaps functor:


let heapsort arr =
  let len = Array.length arr in
  let heap_size = ref len in
  build_max_heap arr;
  for i = len - 1 downto 1 do
    swap arr 0 i;
    heap_size := !heap_size - 1;
    max_heapify !heap_size arr 0
  done

Heapsort Complexity

The main bulk of the complexity is taken by build_max_heap arr (which results in \(O(n \log n)\)) and by running the loop. Since the loop runs \(n - 1\) iterations, and each restoration of the heap takes \(O(\log n)\) swaps, the overall complexity of Heapsort is \(O(n \log n)\).
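Spelling this estimate out as a calculation (the first summand accounting for build_max_heap, the sum accounting for the loop iterations):

\[
T(n) \;=\; O(n \log n) \;+\; \sum_{i=1}^{n-1} \big( O(1) + O(\log n) \big) \;=\; O(n \log n).
\]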

Evaluating Heapsort

We can now use our checker to make sure that heapsort indeed delivers sorted arrays:

module Checker = SortChecker(KV)

let c = generate_key_value_array 1000
let d = Array.copy c

The following are the results of the experiment:

# heapsort d;;
- : unit = ()
# Checker.sorted_spec c d;;
- : bool = true

Which sorting algorithm to choose?

By now we have seen three linearithmic sorting algorithms: merge sort, Quicksort and Heapsort. The first two achieve efficiency via the divide-and-conquer strategy (structuring the computations in a tree). The last one exploits the properties of a maintained data structure (i.e., a heap), which also, coincidentally, turns out to be a tree. It would be interesting to compare the relative performance of the three implementations we have, by running them on three copies of the same array:


let x = generate_key_value_array 100000
let y = Array.copy x
let z = Array.copy x
let quicksort = kv_sort_asc

Let us now time the executions:

# time heapsort x;;
Execution elapsed time: 0.511102 sec
- : unit = ()
# time quicksort y;;
Execution elapsed time: 0.145787 sec
- : unit = ()
# time merge_sort z;;
Execution elapsed time: 0.148201 sec
- : unit = ()

We can repeat the experiment on a larger array (e.g., \(10^6\) elements):

# time heapsort x;;
Execution elapsed time: 6.943117 sec
- : unit = ()
# time quicksort y;;
Execution elapsed time: 2.049979 sec
- : unit = ()
# time merge_sort z;;
Execution elapsed time: 2.192766 sec
- : unit = ()

As we can see, the relative performance of the three algorithms remains the same, with our implementation of heapsort being about 3.5 times slower than both quicksort and merge_sort. The reason why quicksort beats heapsort by a constant factor is that quicksort performs almost no "unnecessary" element swapping, which is time-consuming. In contrast, even if the array is already ordered, heapsort is going to swap its elements around in order to establish the heap structure. However, on an already-sorted (or almost-sorted) array, heapsort performs significantly better than our quicksort (for which a sorted input is a pathological case) and, unlike merge_sort, it does not require extra memory:

# let x = generate_key_value_array 10000;;
...
# time quicksort x;;
Execution elapsed time: 0.014825 sec
- : unit = ()
# time quicksort x;;
Execution elapsed time: 3.650797 sec
- : unit = ()
# time heapsort x;;
Execution elapsed time: 0.044624 sec
- : unit = ()

Priority Queues

• File: PriorityQueue.ml

Recall our main motivation for studying binary heaps: efficient retrieval of an element with the maximal/minimal key in an array, without re-sorting it from scratch between the changes. A data structure that allows for efficient retrieval of an element with the highest/lowest key is called a priority queue. In this section, we will design a priority queue based on the implementation of heaps we already have. The priority queue will be implemented by a dedicated data type and a number of operations, all residing within the following functor:

module PriorityQueue(C: CompareAndPrint) = struct
  (* To be filled *)
end

A carrier of a priority queue (i.e., a container for its elements) will be, of course, an array. Therefore, a priority queue may only hold as many elements as the size of the array allows. We introduce a small encoding tweak, which will be very helpful for accounting for the fact that the array might not be fully filled, allowing the priority queue to grow (as more elements are added to it) and shrink (as elements are extracted). Let us add the following definitions into the body of PriorityQueue:

module COpt = struct
  type t = C.t option

  let comp x y = match x, y with
    | Some a, Some b -> C.comp a b
    | None, Some _ -> -1
    | Some _, None -> 1
    | None, None -> 0

  let pp x = match x with
    | Some x -> C.pp x
    | None -> "None"
end

module H = Heaps(COpt)
(* Do not inline, just include *)
open H

The module COpt "lifts" the pretty-printer and the comparator of a given module C (of signature CompareAndPrint) to elements of type C.t option. Specifically, if an element is None, it is strictly smaller than any Some-like element. As you can guess, the None elements will denote the "empty space" in our priority queue.
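To see this lifting at work on a toy example, here is a standalone version of the same comparator for plain integers (the name comp_opt and the assertions below are illustrative and not part of the course code):

let comp_opt x y = match x, y with
  | Some a, Some b -> compare (a : int) b
  | None, Some _ -> -1
  | Some _, None -> 1
  | None, None -> 0

let () =
  assert (comp_opt None (Some 5) = -1);   (* None loses to any Some *)
  assert (comp_opt (Some 7) (Some 5) > 0);
  assert (comp_opt None None = 0)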

Creating Priority Queues

The queue is represented by an OCaml record of the following shape (also to be added to the module):

type heap = {
  heap_size : int ref;
  arr : H.t array
}

Records in OCaml are similar to those in C and are simply collections of named values (referred to as record fields). Specifically, the record type heap pairs the carrier array arr of elements of type H.t (i.e., C.t lifted to an option) with the dedicated "heap threshold" heap_size, which determines which part of arr serves as a heap.

The following two functions allow one to create an empty priority queue of a given size, and also to turn an array into a priority queue (by effectively building a heap out of it):

let mk_empty_queue size =
  assert (size >= 0);
  {heap_size = ref 0;
   arr = Array.make size None}

(* Make a priority queue from an array *)
let mk_queue a =
  let ls = List.map (fun e -> Some e) (to_list a) in
  let a' = list_to_array ls in
  build_max_heap a';
  {heap_size = ref (Array.length a); arr = a'}

Finally, the following construction allows one to print out the contents of a priority queue, by reusing the functor ArrayPrinter defined at the beginning of this chapter:


module P = ArrayPrinter(COpt)

let print_heap h =
  P.print_array h.arr

Operations on Priority Queues

The first and the simplest operation on a priority queue h is to take its highest-ranked element (i.e., the one with the greatest priority, expressed by means of its key value):

let heap_maximum h = (h.arr).(0)

The next operation allows us not just to look at, but also to extract (i.e., fetch and remove) the maximal element from the priority queue. It replaces the root of the heap by the last element within the heap_size threshold, marks the freed slot as vacant (None), shrinks the threshold, and restores the heap structure:

let heap_extract_max h =
  if !(h.heap_size) < 1 then None
  else
    let a = h.arr in
    (* Take the maximum element *)
    let max = a.(0) in
    (* Move the last heap element to the root *)
    a.(0) <- a.(!(h.heap_size) - 1);
    (* Mark the freed slot as vacant *)
    a.(!(h.heap_size) - 1) <- None;
    (* Shrink the heap and restore its structure *)
    h.heap_size := !(h.heap_size) - 1;
    max_heapify !(h.heap_size) h.arr 0;
    max

Insertion of a new element is dual: the element is placed into the first vacant slot, and is then "bubbled up" towards the root by the auxiliary procedure heap_increase_key, which keeps swapping an element with its parent while the parent's key is smaller (the code below relies on the parent accessor of the Heaps functor, a counterpart of left and right):

let heap_increase_key h i key =
  let a = h.arr in
  (* The new key should not be smaller than the old one *)
  assert (comp key a.(i) >= 0);
  a.(i) <- key;
  let j = ref i in
  (* Bubble the element up, towards the root *)
  while !j > 0 && comp (snd (get_exn (parent a !j))) a.(!j) < 0 do
    let p = fst (get_exn (parent a !j)) in
    swap a !j p;
    j := p
  done

let max_heap_insert h elem =
  let hs = !(h.heap_size) in
  if hs >= Array.length h.arr
  then raise (Failure "Maximal heap capacity reached!");
  h.heap_size := hs + 1;
  heap_increase_key h hs (Some elem)

Insertion only succeeds in the case that there is still vacant space in the queue (i.e., at the end of the array), which can be determined by examining the heap_size field of h. If the space permits, the limit heap_size is increased. Since we know that None used to occupy the vacant place (which is an invariant maintained by means of heap_size), we can simply install the new element Some elem (which is guaranteed to be larger than None, as per our defined comparator) and let the heap rebalance using heap_increase_key.

Given the complexity of heap_increase_key (at most \(O(\log n)\) swaps on the way towards the root), it is easy to show that the complexity of element insertion is \(O(\log n)\). This brings us to an important property of priority queues implemented by means of heaps:

Complexity of priority queue operations

For a priority queue of size \(n\):
• finding the largest element has complexity \(O(1)\),
• extraction of the largest element has complexity \(O(\log n)\),
• insertion of a new element has complexity \(O(\log n)\).

Working with Priority Queues

Let us see a priority queue in action. We start by creating it from a randomly generated array:


module PQ = PriorityQueue(KV)
open PQ

let q = mk_queue
    [|(6, "egkbs"); (4, "nugab"); (4, "xcwjg"); (4, "oxfyr");
      (4, "opdhq"); (0, "huiuv"); (0, "sbcnl"); (2, "gzpyp");
      (4, "hymnz"); (2, "yxzro")|];;

Let us see what's inside:

# q;;
- : PQ.heap =
{heap_size = {contents = 10};
 arr =
  [|Some (6, "egkbs"); Some (4, "nugab"); Some (4, "xcwjg");
    Some (4, "oxfyr"); Some (4, "opdhq"); Some (0, "huiuv");
    Some (0, "sbcnl"); Some (2, "gzpyp"); Some (4, "hymnz");
    Some (2, "yxzro")|]}

We can proceed by checking the maximum:

# heap_maximum q;;
- : PQ.H.t = Some (6, "egkbs")
(* It is indeed a heap! *)
# PQ.H.is_heap q.arr;;
- : bool = true

Let us extract several maximum elements:

# heap_extract_max q;;
- : PQ.H.t option = Some (6, "egkbs")
# heap_extract_max q;;
- : PQ.H.t option = Some (4, "nugab")
# heap_extract_max q;;
- : PQ.H.t option = Some (4, "oxfyr")
# heap_extract_max q;;
- : PQ.H.t option = Some (4, "hymnz")

Is it still a heap?

# q;;
- : PQ.heap =
{heap_size = {contents = 6};
 arr =
  [|Some (4, "opdhq"); Some (2, "yxzro"); Some (4, "xcwjg");
    Some (0, "sbcnl"); Some (2, "gzpyp"); Some (0, "huiuv");
    None; None; None; None|]}
# PQ.H.is_heap q.arr;;
- : bool = true

Finally, let us insert a new element and check whether it is still a heap:

# max_heap_insert q (7, "abcde");;
- : unit = ()
# q;;
- : PQ.heap =
{heap_size = {contents = 7};
 arr =
  [|Some (7, "abcde"); Some (2, "yxzro"); Some (4, "opdhq");
    Some (0, "sbcnl"); Some (2, "gzpyp"); Some (0, "huiuv");
    Some (4, "xcwjg"); None; None; None|]}
# heap_maximum q;;
- : PQ.H.t = Some (7, "abcde")

Exercises

Exercise 1

Answer the following small questions about heaps:

1. What are the maximal and the minimal numbers of elements in a heap of height \(h\)? Explain your answer and give examples.
2. Is an array that is sorted a min-heap?
3. Where in a max-heap might the elements with the smallest keys reside, assuming that all keys are distinct?

Exercise 2

• Let us remove the self-recursive call at the end of max_heapify. Give a concrete example of an array arr which is almost a heap (with just one offending triple, rooted at i), such that the call max_heapify (Array.length arr) arr i does not restore the heap, unless run recursively.


• Rewrite max_heapify so that it uses a while-loop instead of recursion. Provide a variant for this loop.

Exercise 3

Implement in OCaml and check the invariant from Section Building a heap from an array. Explain how it implies the postcondition of build_max_heap (which should be expressed in terms of is_heap).

Exercise 4

Implement in OCaml and check the invariant of the for-loop of heapsort. How does it imply the postcondition (i.e., that the whole array is sorted)? Hint: how does it relate the elements of the original array (you might need a copy of it), the sub-array before heap_size, and the sub-array beyond heap_size?

Exercise 5

Reimplement heapsort so that it works with min-heaps instead of max-heaps. For this, you might also reimplement or, better, generalise the prior definitions of the Heap module.

Exercise 6

Implement and test the invariant for the while-loop of Radix Sort.

Week 06: Abstract Data Types

Equivalence Classes and Union-Find

• File: UnionFind.ml

An equivalence class is a set of elements related to each other by a relation \(R\), such that \(R\) is

1. reflexive (any element is related to itself),
2. symmetric (if \(p\) is related to \(q\), then \(q\) is related to \(p\)), and
3. transitive (if \(p\) is related to \(q\) and \(q\) is related to \(r\), then \(p\) is related to \(r\)).

For instance, "has the same remainder of division by 3" is an equivalence relation on integers, partitioning them into three equivalence classes.


Reasoning about the inclusion of an element into a certain equivalence class within a set is a common problem in computing. For instance, it appears in the following domains:
• checking if a computer node is in a certain network segment,
• checking whether two variables in the same program are equivalent (aliases),
• reasoning about mathematical sets.

We are going to refer to equivalent elements (according to a certain equivalence relation) as connected ones.

Union-Find Structure

Union-Find is a data structure that allows us to efficiently represent a finite set of \(n\) elements (encoded by the segment of integers 0 ... n - 1) with the possibility to answer the following questions:
• Are elements i and j connected?
• What is the equivalence class of an element i?
• How many equivalence classes are there in the given relation?

In addition to those queries, Union-Find supports modification of the current equivalence relation by taking a union of two classes, corresponding to elements i and j, therefore possibly affecting the answers to the questions above. The definition of the Union-Find structure is very simple:

module UnionFind = struct
  type t = {
    count : int ref;
    id : int array
  }
  (* More definitions come here *)
end

That is, it only stores a count of equivalence classes and an array representing the elements. We can create a union-find for n elements by relying on the previously defined machinery from past lectures:

let mk_UF n =
  let ints =
    ArrayUtil.list_to_array (ArrayUtil.iota (n - 1)) in
  { count = ref n;
    id = ints }

let get_count uf = !(uf.count)

Working with Sets via Union-Find

The Union-Find structure is going to rely on an old trick — encoding certain information about elements in an array via their locations. In this particular case, once created, each location in a UnionFind's "carrier" array determines an equivalence class, represented by the element itself. That is, by creating a UF structure via mk_UF 10, we create an equivalence relation with 10 classes, where each element is only connected to itself. However, in the future the class of an element might change, which will be reflected by changing the value in the corresponding array cell. More specifically, the dependencies in the array will be forming chains, ending with a root — an element that is in its own position. Keeping in mind this fact — that all element-position chains eventually reach a fixed point (a root) — we can implement a procedure determining the equivalence class of an element as a fixed point of the corresponding chain:

let find uf p =
  let r = ref p in
  while (!r <> uf.id.(!r)) do
    r := uf.id.(!r)
  done;
  !r

let connected uf p q =
  find uf p = find uf q

That is, to determine the class of an element p, the function find follows the chain that starts from it, via array indices, until it reaches a root, which corresponds to the "canonical element" of p's equivalence class. The intuition behind find becomes clearer once we see how union is implemented:

let union uf p q =
  let i = find uf p in
  let j = find uf q in
  if i = j then ()
  else begin
    (* Attach the root of p's class to the root of q's class *)
    uf.id.(i) <- j;
    uf.count := !(uf.count) - 1
  end

Taking the union of two classes amounts to finding their roots and linking one root to the other; since two classes become merged into one, the count of classes decreases. The following function prints out all non-empty equivalence classes of a given Union-Find instance:

let print_uf uf =
  let n = Array.length uf.id in
  let ids = ArrayUtil.iota (n - 1) in
  for i = 0 to n - 1 do
    let connected = List.filter (fun e -> find uf e = i) ids in
    if connected <> [] then begin
      Printf.printf "Class %d: [" i;
      List.iter (fun j -> Printf.printf "%d; " j) connected;
      print_endline "]"
    end
  done

Let us run some experiments using utop:

utop # open UnionFind;;
utop # let uf = UnionFind.mk_UF 10;;
val uf : t = {count = {contents = 10}; id = [|0; 1; 2; 3; 4; 5; 6; 7; 8; 9|]}
utop # UnionFind.print_uf uf;;
Class 0: [0; ]
Class 1: [1; ]
Class 2: [2; ]
Class 3: [3; ]
Class 4: [4; ]
Class 5: [5; ]
Class 6: [6; ]
Class 7: [7; ]
Class 8: [8; ]
Class 9: [9; ]
- : unit = ()

Now let us merge some equivalence classes:

utop # union uf 0 1; union uf 2 3; union uf 4 5; union uf 6 7; union uf 8 9; union uf 1 8;;
- : unit = ()
utop # UnionFind.connected uf 0 9;;
- : bool = true
utop # print_uf uf;;
Class 3: [2; 3; ]
Class 5: [4; 5; ]
Class 7: [6; 7; ]
Class 9: [0; 1; 8; 9; ]
- : unit = ()

We will make active use of the Union-Find structure in future lectures.

Information Hiding and Abstraction

Data structures provide an efficient way to represent information, facilitating access to it and its manipulation. However, it is not always desirable to let the client of a data structure know how exactly it is implemented. The mechanism for hiding the implementation details (alternatively, information hiding) is typically referred to as abstraction, and different programming languages provide various mechanisms to define abstractions.

A data structure, once its implementation details are hidden, becomes an Abstract Data Type (ADT) — a representation of information that can only be manipulated by means of a well-defined interface, without exposing the details of how the information is structured. Most abstract data types in computer science are targeted to represent, in a certain way, a set of elements, providing different interfaces (i.e., collections of functions/methods) to access elements of a set, add, and remove them. The choice of a particular ADT is usually dictated by the needs of a client program and the semantics of the ADT. For instance, some ADTs are designed to facilitate the search for a particular element in a set (e.g., search trees), while others provide a more efficient way to retrieve an element added most recently (e.g., stacks), and different applications might rely on either of those characteristic properties for the sake of correctness and/or efficiency. In this chapter, we will study several basic abstract data types, learn their properties and applications, and see how they can be implemented differently by means of the data structures at hand.

Stacks • File: Stacks.ml


A stack is a good example of a simple abstract data type that implements a set (with possibly repeating elements) with a small number of operations for adding and removing elements. In doing so, a stack provides two main operations, pop and push, that implement a discipline known as LIFO (last-in-first-out): the element added last is retrieved first.

The Stack interface

A simple stack interface is described by the following OCaml module signature:

module type AbstractStack = sig
  type 'e t
  val mk_stack : int -> 'e t
  val is_empty : 'e t -> bool
  val push : 'e t -> 'e -> unit
  val pop : 'e t -> 'e option
end

Notice that the first type member (type 'e t) is what makes this data type abstract. The type declaration stands for an "abstract type t of a stack storing elements of type 'e". In reality, the stack, as a data structure, can be implemented in various ways, but this type definition does not reveal those details. Instead, the signature provides four functions to manipulate stacks, and this is the only vocabulary for doing so. Specifically:

• mk_stack creates a new empty stack (hence the output result is 'e t) with a suggested size n,
• is_empty checks if the stack is empty,
• push adds a new element to the top of the stack,
• pop removes the latest added element e from the top of the stack and returns Some e if such an element exists, or None if the stack is empty. The stack is then modified, so this element is removed.

Unlike an OCaml list, a stack is a mutable structure: each "effectful" operation, such as push or pop, changes its contents rather than returning a new copy. This is why the result type of push is unit. Both push and pop thus modify the stack contents, in addition to returning a result (in the case of pop).

A List-Based Stack

Our first concrete implementation of a stack ADT exploits the fact that OCaml lists behave precisely like stacks, so we can build the following implementation almost effortlessly:


module ListBasedStack : AbstractStack = struct
  type 'e t = 'e list ref
  let mk_stack _ = ref []
  let is_empty s = match !s with
    | [] -> true
    | _ -> false
  let push s e =
    let c = !s in
    s := e :: c
  let pop s = match !s with
    | h :: t -> s := t; Some h
    | _ -> None
end

What is important to notice is that type 'e t in the concrete implementation is defined to be 'e list ref, so in the rest of the module we can use the properties of this concrete data type (i.e., dereference it and work with it as with an OCaml list). Notice also that the concrete module ListBasedStack is annotated with the abstract signature AbstractStack, making sure that all definitions have the matching types. The implication of this is that no user of this module will be able to exploit the fact that our "stack type" is, in fact, a reference to an OCaml list. An example of such an "exploit" would be, for instance, making the stack empty, bypassing the use of pop to deplete it first, element by element.

When implementing your own concrete version of an abstract data type, it is recommended to ascribe the module signature (e.g., AbstractStack) as the last step of your implementation. If you do it before the module is complete, the OCaml compiler/back-end will be complaining that your implementation of the module does not match the signature, which makes the whole development process less pleasant.

Let us now test our stack ADT implementation by pushing and popping different elements, keeping in mind the expected LIFO behaviour. We start by creating an empty stack:

# let s = ListBasedStack.mk_stack 0;;
val s : '_weak101 ListBasedStack.t = <abstr>

Notice that the type '_weak101 indicates that OCaml doesn't yet know the type of the stack elements; it will become clear once we push the first one. Furthermore, the type of the stack itself is presented as ListBasedStack.t, i.e., it is not shown to be a reference to a list, which is what we defined it to be. Let us now push three elements to the stack and check it for emptiness:

# push s (4, "aaa");;
- : unit = ()
# push s (5, "bbb");;
- : unit = ()
# push s (7, "ccc");;
- : unit = ()
# is_empty s;;
- : bool = false

As the next step, we can start removing elements from the stack, making sure that they come out in the reverse order with respect to how they were added:

# pop s;;
- : (int * string) option = Some (7, "ccc")
# pop s;;
- : (int * string) option = Some (5, "bbb")
# pop s;;
- : (int * string) option = Some (4, "aaa")

Finally, we can test that, after we've removed all initially added elements, the stack is empty and remains this way:

# pop s;;
- : (int * string) option = None
# pop s;;
- : (int * string) option = None

An Array-Based Stack

An alternative implementation of stacks uses an array of some size n, thus requiring constant-size memory. A natural shortcoming of such a solution is the fact that the stack can hold only up to n elements. However, the advantage is that one can implement such a stack in languages that do not provide algebraic lists, but only provide arrays (e.g., C):

module ArrayBasedStack : AbstractStack = struct
  type 'e t = {
    elems : 'e option array;
    cur_pos : int ref
  }
  (* More functions to be added here *)
end

The abstract type 'e t is now defined quite differently — it is a record that stores two fields. The first one is an array of options of elements of type 'e (representing the elements of the stack in a desired order), while the second one is a pointer to the position cur_pos at which the next element of the stack must be added. Defining the stack this way, we agree on the following invariant: the "empty" slots in the stack are represented by None, and the array serving as a "carrier" for the stack is filled with elements from its beginning, with cur_pos pointing to the next empty position to fill. For instance, a stack with the maximal capacity of 3 elements, holding the elements "a" and "b", will be represented by the array [|Some "b"; Some "a"; None|], with cur_pos being 2, indicating the next slot to insert an element.

In order to make a new stack, we create a fixed-length array of size n, setting cur_pos to point to 0:

let mk_stack n = {
  elems = Array.make n None;
  cur_pos = ref 0
}

We can also use cur_pos to determine whether the stack is empty or not:

let is_empty s = !(s.cur_pos) = 0

Pushing a new element requires us to insert it into the next vacant position in the "carrier" array and then increment the current position. If the current position points outside of the scope of the array, it means that the stack is full and cannot accommodate more elements, so we just throw an exception:

let push s e =
  let pos = !(s.cur_pos) in
  if pos >= Array.length s.elems
  then raise (Failure "Stack is full")
  else (s.elems.(pos) <- Some e;
        s.cur_pos := pos + 1)

Popping an element returns the topmost element Some e if the current position is greater than 0, or None otherwise; it also clears the vacated slot and decrements the current position:

let pop s =
  let pos = !(s.cur_pos) in
  let elems = s.elems in
  if pos <= 0 then None
  else (
    let res = elems.(pos - 1) in
    s.elems.(pos - 1) <- None;
    s.cur_pos := pos - 1;
    res)

Queues

A queue is an abstract data type that implements the FIFO (first-in-first-out) discipline: the element added first is retrieved first. A simple queue interface is described by the following module signature:

module type Queue = sig
  type 'e t
  val mk_queue : int -> 'e t
  val is_empty : 'e t -> bool
  val is_full : 'e t -> bool
  val enqueue : 'e t -> 'e -> unit
  val dequeue : 'e t -> 'e option
  val queue_to_list : 'e t -> 'e list
end

Indeed, one is at freedom to decide which functionality should be added to an ADT interface — a point we demonstrate by making the queue signature a bit more expressive, in terms of the functionality it provides, than the stack interface.


As in the example of stacks, a queue of elements of type 'e is represented by an abstract parameterised type 'e t. Two methods, is_empty and is_full, allow one to check whether it is empty or full, respectively. enqueue and dequeue provide the main FIFO functionality of the queue: the former adds elements to the "back" of the queue object, while the latter removes elements from its "front". Finally, the utility method queue_to_list transforms a current snapshot of the queue into an immutable OCaml list. Similarly to the stack ADT, queues defined by means of the Queue signature are mutable, i.e., the functions enqueue and dequeue modify the contents of a queue in place rather than create a new queue.

An Array-Based Queue

The following module implements a queue based on a finite-size array:

module ArrayBasedQueue : Queue = struct
  type 'e t = {
    elems : 'e option array;
    head : int ref;
    tail : int ref;
    size : int
  }
  let mk_queue sz = {
    elems = Array.make sz None;
    head = ref 0;
    tail = ref 0;
    size = sz
  }
  (* More functions come here *)
end

Since a queue, unlike a stack, can be changed on both sides, "front" and "back", the empty slots may appear both in the beginning and at the end of its carrier array. In order to utilise the array efficiently, we will engineer our concrete implementation so that it "wraps around" and uses the empty array cells in the beginning. In our representation, the head pointer points to the next element to be removed via dequeue, while tail points to the next array cell to install an element (unless the queue is full). This implementation requires some care in managing the head/tail references. For instance, both an empty and a fully packed queue are characterised by head and tail pointing to the same array cell:

let is_empty q =
  !(q.head) = !(q.tail) &&
  q.elems.(!(q.head)) = None

let is_full q =
  !(q.head) = !(q.tail) &&
  q.elems.(!(q.head)) <> None

The only difference is that, in the case of the queue being full, the cell at which both head and tail point is occupied by some element (and, hence, is not None), whereas it is None if the queue is empty. Adding and removing elements to/from the queue is implemented in a way that takes the "wrapping around" logic into account. For instance, enqueue checks whether the queue is full and whether the tail reference has reached the end of the array. In case it has, but the queue still has slots to add elements, it "wraps around" by setting tail to be 0 (i.e., to point to the beginning of the array):

let enqueue q e =
  if is_full q
  then raise (Failure "The queue is full!")
  else (
    let tl = !(q.tail) in
    q.elems.(tl) <- Some e;
    q.tail :=
      if tl = q.size - 1
      then 0
      else tl + 1)

dequeue and queue_to_list for the array-based queue are implemented symmetrically, advancing (and wrapping around) the head pointer.

A Queue Based on Doubly-Linked Lists

A queue can also be implemented on top of a doubly-linked list, which removes the fixed-capacity restriction. The corresponding module, DLLBasedQueue, keeps two (optional) references, head and tail, to the first and the last node of such a list. Enqueueing an element creates a fresh node and appends it after the current tail; if the queue was empty, the new node also becomes its head:

let enqueue q e =
  let n = mk_node e in
  (* If the queue was empty, the new node becomes its head *)
  (if !(q.head) = None then q.head := Some n);
  (* Append the new node after the current tail, if any *)
  (match !(q.tail) with
   | Some t -> insert_after t n
   | None -> ());
  q.tail := Some n

Dequeueing an element simply returns the payload of the node pointed to by head and moves the reference to its successor:

let dequeue q =
  match !(q.head) with
  | None -> None
  | Some n ->
    let nxt = next n in
    q.head := nxt;
    remove n; (* This is not necessary, but helps GC *)
    Some (value n)

The removal of the node n on the penultimate line of dequeue is not necessary for the correctness of the data structure, but it helps to save memory. To understand why it is useful, one needs to know a bit about how the tracing garbage collector works in OCaml. While garbage collection and automated memory management are outside the scope of this course, let us just notice that not removing the node will make the OCaml runtime treat it as being in use (as it is reachable from its successor), and hence keep it in memory, which could otherwise be used for something else.

A conversion to a list is almost trivial, given the functionality of a doubly-linked list:

let queue_to_list q = match !(q.head) with
  | None -> []
  | Some n -> to_list_from n

Now, with this definition complete, we can do some experiments. First, as before, let us define a printer for the contents of the queue:

module DLQPrinter = QueuePrinter(DLLBasedQueue)

let pp (k, v) = Printf.sprintf "(%d, %s)" k v

let print_kv_queue q = DLQPrinter.print_kv_queue q pp

Finally, let us put and remove some elements from the queue:

# let dq = DLLBasedQueue.mk_queue 0;;
val dq : '_weak105 DLLBasedQueue.t = <abstr>
# let a = generate_key_value_array 10;;
val a : (int * string) array =
  [|(7, "sapwd"); (3, "bsxoq"); (0, "lfckx"); (7, "nwztj"); (5, "voeed");
    (9, "jtwrn"); (8, "zovuq"); (4, "hgiki"); (8, "yqnvq"); (3, "gjmfh")|]

Similarly to previous examples, we fill up the queue from a randomly generated array:

# for i = 0 to 9 do enqueue dq a.(i) done;;
- : unit = ()
# print_kv_queue dq;;
[(7, sapwd); (3, bsxoq); (0, lfckx); (7, nwztj); (5, voeed); (9, jtwrn); (8, zovuq); (4, hgiki); (8, yqnvq); (3, gjmfh); ]
- : unit = ()

We can then ensure that the elements come out in the order they were added:

# is_empty dq;;
- : bool = false
# dequeue dq;;
- : (int * string) option = Some (7, "sapwd")
# dequeue dq;;
- : (int * string) option = Some (3, "bsxoq")
# dequeue dq;;
- : (int * string) option = Some (0, "lfckx")
# enqueue dq (13, "lololo");;
- : unit = ()
# print_kv_queue dq;;
[(7, nwztj); (5, voeed); (9, jtwrn); (8, zovuq); (4, hgiki); (8, yqnvq); (3, gjmfh); (13, lololo); ]
- : unit = ()

Exercises

Exercise 1

Implement a queue data structure which does not use OCaml arrays or doubly-linked lists, but uses at most two values of type ref. Make sure it satisfies the Queue interface. To do so, use two OCaml lists to represent the parts for "enqueueing" and "dequeueing". What happens if one of them gets empty? Argue that the average-case complexity of the enqueue and dequeue operations of your implementation is constant.

Exercise 2

Implement a procedure for "reversing" a doubly-linked list, starting from an arbitrary node (which might be at its beginning, end, or in the middle). Make sure that your procedure works in linear time.

Midterm Project: Memory Allocation and Reclamation

• Midterm project starter code


This midterm project will consist of two parts: team-based coding assignments and individual implementation reports.

Coding Assignment

How do we implement references and pointers in languages that do not provide them? In this project you will be developing a solution for implementing linked data structures (such as lists and queues) without relying on OCaml's explicit ref type, by means of implementing a custom C-style memory allocator. In order to encode the necessary machinery for dynamically creating references, we notice that one can represent a collection of similar values (e.g., of type int or string) by packaging them into arrays, so that such arrays play the role of random-access memory. For instance, two consecutive nodes with the payloads (15, "a") and (42, "b") of a doubly-linked list containing pairs of integers and strings can be encoded by sub-segments of the following three arrays: one for pointer "addresses", one for integers, and one for strings.

A list "node" (dll_node) is simply a segment of four consecutive entries in the pointer array, with the corresponding links to the integer and the string parts of the payload. Therefore, in order to work with a doubly-linked list represented via three arrays, one should manipulate the encodings of references by changing the contents of those arrays.
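To make the encoding concrete, here is one hypothetical layout of a dll_node as four consecutive cells of the pointer array (the offsets and field order are illustrative; the starter code and your own design may differ):

(* A node allocated at pointer "address" p:
   p + 0 : "address" of the previous node (or a null value)
   p + 1 : "address" of the next node (or a null value)
   p + 2 : position of the integer payload in the integer array
   p + 3 : position of the string payload in the string array *)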


The template code for the project can be obtained via the link available on Canvas. In this project, you are expected to deliver the following artefacts:

• An implementation of an array-based memory allocator that can provide storage (of a fixed limited capacity) for dynamically "allocated" pointers, integers, and strings, with a possibility of updating them. Similarly to languages without automatic memory management, such as C, it should be possible to both allocate and "free" consecutive pointer segments, making it possible to reuse the memory (i.e., "reclaim" it by the allocator).
• An implementation of a doubly-linked list, built on top of the allocator interface, via the abstract "heap" it provides and the operations for manipulating the pointers. Crucially, the list should free the memory occupied by its nodes when the nodes are explicitly removed.
• An implementation of a queue data type (taking int * string pairs), following the module signature from Section Queues, and tests checking that it indeed behaves like a queue. As your queue will not be polymorphic and will only accept elements of a specific type, it needs to implement a specialised signature:

module type Queue = sig
  type t
  val mk_queue : int -> t
  val is_empty : t -> bool
  val is_full : t -> bool
  val enqueue : t -> (int * string) -> unit
  val dequeue : t -> (int * string) option
  val queue_to_list : t -> (int * string) list
end

The nature of the task imposes some restrictions and hints at some observations:

• You may not use OCaml's references (i.e., values of type ref) in your implementation.
• As you remember, pointers and arrays are somewhat similar. Specifically, most of the pointer operations expect not just the pointer value p but also a non-negative integer "offset" o, so that the considered value is located at the "address" p + o.
• The allocator only has to provide storage and the machinery to manipulate references storing (a) integers, (b) strings, and (c) pointers, which can point to either of the three kinds of values. You are not expected to support references to any other composite data types (such as, e.g., pairs). However, you might need to encode those data types using consecutive pointers with offsets.

More hints on the implementation are provided in the README.md file of the repository template. This part of the assignment is to be completed in groups of two. Please follow the instructions on Canvas to create a team on GitHub Classroom and make a submission of your code on Canvas.


For this part of the assignment, both participants will be awarded the same grade, depending on how well their code implements the tasks and also on the quality of the provided tests. Feel free to think about how to split the implementation workload within your team.

Report

The reports are written and submitted on Canvas individually. They should focus on the following aspects of your experience with the project:

• A high-level overview of your design of the allocator implementation. How did you define its basic data structures, and what algorithmic decisions did you take for more efficient memory management? Please don't quote the code verbatim at length (you may provide 3-4 line code snippets, if necessary). Pictures and drawings are welcome, but are not strictly required.
• What did you consider to be the essential properties of your allocator implementation and of the data structures that rely on it? How did you test those properties?
• How was the design and implementation effort split between the team members, and what were your contributions?
• Any discoveries, anecdotes, and gotchas, elaborating on your experience with this project.

Your individual report should not be very long; please try to make it succinct and to the point; 3-4 pages should be enough. The reports will be graded based on their clarity, demonstration of your understanding of the project design, and how fun they are to read (the report should not simply paraphrase the code).

Week 07: Hashing-Based Data Structures

Hash-tables

• File: HashTables.ml

Hash-tables generalise the ideas of ordinary arrays and also (somewhat surprisingly) bucket-sort, providing an efficient way to store elements in a collection, addressed by their keys, with average \(O(1)\) complexity for inserting, finding and removing elements from the collection.


Allocation by hashing keys

At the heart of hash-tables is the idea of a hash-function — a mapping from elements of a certain type to randomly distributed integers. This functionality can be described by means of the following OCaml signature:

module type Hashable = sig
  type t
  val hash : t -> int
end

Designing a good hash-function for an arbitrary data type (e.g., a string) is highly non-trivial and is outside the scope of this course. The main challenge is to make it such that "similar" values (e.g., s1 = "aaa" and s2 = "aab") get very different hashes (e.g., hash s1 = 12423512 and hash s2 = 99887978), thus providing a uniform distribution. It is not required for a hash-function to be injective (i.e., it may map different elements to the same integer value — a phenomenon known as a hash collision). However, for most of the purposes of hash-functions, it is assumed that collisions are relatively rare.
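As a minimal sketch, one can obtain an instance of Hashable for strings by delegating to OCaml's polymorphic Hashtbl.hash (this particular instance is ours, given for illustration; it is not defined in the notes):

module StringHash : Hashable with type t = string = struct
  type t = string
  (* Hashtbl.hash works for values of any type; here we fix it to strings. *)
  let hash (s : t) = Hashtbl.hash s
end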

Operations on hash-tables

As we remember, in arrays, elements are indexed by integers ranging from 0 to the size of the array minus one. Hash-tables provide an interface similar to arrays, with the only difference that values of any type t can be used as keys for indexing elements (similarly to integers in an array), as long as there is an implementation of hash available for that type. An interface of a hash-table is thus parameterised by the hashing strategy used for its implementation for a specific type of keys. The following module signature describes the types and operations over a hash-table:

module type HashTable = functor (H : Hashable) -> sig
  type key = H.t
  type 'a hash_table
  val mk_new_table : int -> (key * 'v) hash_table
  val insert : (key * 'v) hash_table -> key -> 'v -> unit
  val get : (key * 'v) hash_table -> key -> 'v option
  val remove : (key * 'v) hash_table -> key -> unit
end

As announced, key specifies the type of keys used to refer to elements stored in a hash-table. One can create a new hash-table of a predefined size (of type int) via mk_new_table. The next three functions provide the main interface for a hash-table, allowing one to insert and retrieve elements for a given key, as well as remove elements by key, thus changing the state of the hash-table (hence the return type of remove is unit).


Implementing hash-tables

Implementations of hash-tables build on a simple idea. In order to fit an arbitrary number of elements with different keys into a limited-size array, one can use a trick similar to bucket sort, enabled by the hashing function:

• compute (hash k) mod n to determine the slot (aka bucket) in an array of size n for inserting an element with a key k;
• if there are already elements in this bucket, add the new one, together with the old ones, storing them in a list.

Then, when trying to retrieve an element with a key k, one has to:

• compute (hash k) mod n to determine the bucket where the element is located;
• go through the bucket with a linear search, finding the element whose key is precisely k.

That is, it is okay for elements with different keys to collide on the same bucket, as a more elaborate search will be performed within each bucket.

Why are hash-tables so efficient? As long as the size of the carrier array is greater than, or roughly the same as, the number of elements inserted so far, and there were not many collisions, we can assume that each bucket has a very small number of elements (for which the collisions have happened while determining their bucket). Therefore, as long as the size of a bucket is limited by a certain constant, the search boils down to (a) computing a bucket for a key in constant time and (b) scanning the bucket for the right element, both operations yielding \(O(1)\) complexity.

Let us start by defining a simple hash-table that uses lists to represent buckets:

module ListBasedHashTable : HashTable = functor (H : Hashable) -> struct
  type key = H.t
  type 'v hash_table = {
    buckets : 'v list array;
    size : int
  }
  (* More functions are coming *)
end

Making a new hash-table can be done by simply allocating a new array:


let mk_new_table size =
  let buckets = Array.make size [] in
  {buckets = buckets; size = size}

Inserting an element follows the scenario described above. List.filter is used to make sure that no elements with the same key are lingering in the same bucket:

let insert ht k v =
  let hs = H.hash k in
  let bnum = hs mod ht.size in
  let bucket = ht.buckets.(bnum) in
  let clean_bucket =
    List.filter (fun (k', _) -> k' <> k) bucket in
  ht.buckets.(bnum) <- (k, v) :: clean_bucket

Fetching an element for a given key amounts to locating its bucket and scanning the latter linearly:

let get ht k =
  let hs = H.hash k in
  let bnum = hs mod ht.size in
  let bucket = ht.buckets.(bnum) in
  let res = List.find_opt (fun (k', _) -> k' = k) bucket in
  match res with
  | Some (_, v) -> Some v
  | _ -> None

Finally, removing an element is similar to inserting a new one:

let remove ht k =
  let hs = H.hash k in
  let bnum = hs mod ht.size in
  let bucket = ht.buckets.(bnum) in
  let clean_bucket =
    List.filter (fun (k', _) -> k' <> k) bucket in
  ht.buckets.(bnum) <- clean_bucket

In order to observe the contents of a hash-table, let us also extend its signature with a printing facility, this time stating it via an abstract key type rather than a functor:

module type HashTable = sig
  type key
  type 'v hash_table
  val mk_new_table : int -> (key * 'v) hash_table
  val insert : (key * 'v) hash_table -> key -> 'v -> unit
  val get : (key * 'v) hash_table -> key -> 'v option
  val remove : (key * 'v) hash_table -> key -> unit
  val print_hash_table :
    (key -> string) -> ('v -> string) ->
    (key * 'v) hash_table -> unit
end

For design reasons that will become clear further in this lecture, we still mention the type key of keys separately in the signature. We also add a convenience function print_hash_table to output the contents of the data structure.

A framework for testing hash-tables

Before we re-define the hash-table, let us define a module for automated testing of hash-tables. The module starts with the following preamble:


module HashTableTester
    (H : HashTable with type key = int * string) = struct
  module MyHT = H
  open MyHT
  (* More functions will come here. *)
end

Notice that it is a functor that takes an implementation of the hash-table module H, but requires the keys to be of a specific type int * string. The following function will fill a hash-table from an array:

let mk_test_table_from_array_length a m =
  let n = Array.length a in
  let ht = mk_new_table m in
  for i = 0 to n - 1 do
    insert ht a.(i) a.(i)
  done;
  (ht, a)

The next function takes a hash-table ht and an array a used for filling it with elements, and tests that all elements of the array are also in the hash-table (we optimistically assume that the array does not have repetitions):

let test_table_get ht a =
  let len = Array.length a in
  for i = 0 to len - 1 do
    let e = get ht a.(i) in
    assert (e <> None);
    let x = get_exn e in
    assert (x = a.(i))
  done;
  true

A simple list-based hash-table

With the new signature at hand, let us now redefine a simple implementation of a list-based hash-table. Even though it is not strictly necessary at the moment, we are going to make the type of keys used by the hash-table implementation explicit, and expose it via the following signature:


module type KeyType = sig
  type t
end

The reason why we need to do this will become clear in the next Section Bloom Filters and Their Applications, in which we will need to be able to introspect on the structure of the keys prior to instantiating a hash-table. We proceed with defining our simple hash-table based on lists, as previously:

module SimpleListBasedHashTable(K: KeyType) = struct
  type key = K.t

  type 'v hash_table = {
    buckets : 'v list array;
    capacity : int;
  }

  let mk_new_table cap =
    let buckets = Array.make cap [] in
    {buckets = buckets;
     capacity = cap}

  let insert ht k v =
    let hs = Hashtbl.hash k in
    let bnum = hs mod ht.capacity in
    let bucket = ht.buckets.(bnum) in
    let clean_bucket =
      List.filter (fun (k', _) -> k' <> k) bucket in
    ht.buckets.(bnum) <- (k, v) :: clean_bucket

  let get ht k =
    let hs = Hashtbl.hash k in
    let bnum = hs mod ht.capacity in
    let bucket = ht.buckets.(bnum) in
    let res = List.find_opt (fun (k', _) -> k' = k) bucket in
    match res with
    | Some (_, v) -> Some v
    | _ -> None

  (* Slow remove - introduced for completeness *)
  let remove ht k =
    let hs = Hashtbl.hash k in
    let bnum = hs mod ht.capacity in
    let bucket = ht.buckets.(bnum) in
    let clean_bucket =
      List.filter (fun (k', _) -> k' <> k) bucket in
    ht.buckets.(bnum) <- clean_bucket

  let print_hash_table ppk ppv ht =
    let open Printf in
    print_endline @@ sprintf "Capacity: %d" ht.capacity;
    print_endline "Buckets:";
    for i = 0 to ht.capacity - 1 do
      let bucket = ht.buckets.(i) in
      if bucket <> [] then (
        (* Print bucket *)
        let s = List.fold_left
            (fun acc (k, v) ->
               acc ^ (sprintf "(%s, %s); ") (ppk k) (ppv v)) "" bucket in
        printf "%d -> [ %s]\n" i s)
    done
end

Let us now instantiate the table to use pairs of type int * string as keys, as well as the corresponding testing framework developed above:

module IntKey = struct type t = int end

module SHT = SimpleListBasedHashTable(IntKey)
module SimpleHTTester = HashTableTester(SHT)

let pp_kv (k, v) = Printf.sprintf "(%d, %s)" k v

We can now create a simple hash-table and observe its contents:

utop # let a = generate_key_value_array 15;;
val a : (int * string) array =
  [|(7, "ayqtk"); (12, "kemle"); (6, "kcrtm"); (1, "qxcnk"); (3, "czzva");
    (4, "ayuys"); (6, "cdrhf"); (6, "ukobi"); (10, "hwsjs"); (13, "uyrla");
    (2, "uldju"); (5, "rkolw"); (13, "gnzzo"); (4, "nksfe"); (7, "geevu")|]
utop # let t = SimpleHTTester.mk_test_table_from_array_length a 10;;
val t : (SHT.key * SHT.key) SHT.hash_table = ...
utop # SimpleHTTester.MyHT.print_hash_table pp_kv pp_kv t;;
Capacity: 10
Buckets:


0 -> [ ((7, geevu), (7, geevu)); ((3, czzva), (3, czzva)); ((12, kemle), (12, kemle)); ]
1 -> [ ((7, ayqtk), (7, ayqtk)); ]
2 -> [ ((13, uyrla), (13, uyrla)); ((6, cdrhf), (6, cdrhf)); ]
6 -> [ ((13, gnzzo), (13, gnzzo)); ]
7 -> [ ((5, rkolw), (5, rkolw)); ((6, ukobi), (6, ukobi)); ((1, qxcnk), (1, qxcnk)); ((6, kcrtm), (6, kcrtm)); ]
8 -> [ ((4, ayuys), (4, ayuys)); ]
9 -> [ ((4, nksfe), (4, nksfe)); ((2, uldju), (2, uldju)); ((10, hwsjs), (10, hwsjs)); ]

As we can see, due to hash collisions, some buckets are not used at all (e.g., 3), while others hold multiple values (e.g., 9).

Testing a Simple Hash-Table

We can also add a number of tests for the implementation of our hash-table. For instance, the following test checks that the hash-table stores all (distinct) elements of a randomly generated array:

let%test "ListBasedHashTable insert" =
  let open SimpleHTTester in
  let a = generate_key_value_array 1000 in
  let ht = mk_test_table_from_array_length a 50 in
  test_table_get ht a

A Resizable hash-table

Let us change the implementation of a hash-table so that it can grow as the number of added elements greatly exceeds the number of buckets. We start with the following definition in the module:

module ResizableListBasedHashTable(K : KeyType) = struct
  type key = K.t

  type 'v hash_table = {
    buckets : 'v list array ref;
    size : int ref;
    capacity : int ref;
  }

  let mk_new_table cap =
    let buckets = Array.make cap [] in
    {buckets = ref buckets;
     capacity = ref cap;
     size = ref 0}


  (* More functions are coming here *)
end

That is, the hash-table now includes its own capacity (the number of buckets), along with the size (the number of stored elements). Both are subject to future change, as more elements are added and the table is resized. Adding new elements by means of insert can now trigger the growth of the hash-table structure. Since it is convenient to define resizing by means of insertion into a new hash-table, which is then swapped with the previous one, we define those two functions as mutually recursive, via OCaml's let rec ... and ... construct:

let rec insert ht k v =
  let hs = Hashtbl.hash k in
  let bnum = hs mod !(ht.capacity) in
  let bucket = !(ht.buckets).(bnum) in
  let clean_bucket =
    List.filter (fun (k', _) -> k' <> k) bucket in
  let new_bucket = (k, v) :: clean_bucket in
  !(ht.buckets).(bnum) <- new_bucket;
  (* Update the size if a new key has been added *)
  (if List.length bucket < List.length new_bucket
   then ht.size := !(ht.size) + 1);
  (* Grow the table if the size exceeds the capacity *)
  if !(ht.size) > !(ht.capacity) + 1
  then resize_and_copy ht

and resize_and_copy ht =
  let new_capacity = !(ht.capacity) * 2 in
  let new_buckets = Array.make new_capacity [] in
  let new_ht = {
    buckets = ref new_buckets;
    capacity = ref new_capacity;
    size = ref 0;
  } in
  let old_buckets = !(ht.buckets) in
  let len = Array.length old_buckets in
  for i = 0 to len - 1 do
    let bucket = old_buckets.(i) in
    List.iter (fun (k, v) -> insert new_ht k v) bucket
  done;
  ht.buckets := !(new_ht.buckets);
  ht.capacity := !(new_ht.capacity);
  ht.size := !(new_ht.size)
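A side note on the cost of resizing (a standard argument, not spelled out in these notes): since the capacity doubles each time, a run of n insertions triggers resizes whose rehashing costs are proportional to \(1, 2, 4, \ldots\), up to \(n\), and in total

\[
\sum_{k=0}^{\lceil \log_2 n \rceil} 2^k \;=\; O(n),
\]

so the resizing work adds only a constant overhead per insertion on average; that is, insertion remains amortised \(O(1)\), assuming the buckets stay small.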


Fetching elements from a resizable hash-table is not very different from doing so with a simple hash-table that does not resize:

let get ht k =
  let hs = Hashtbl.hash k in
  let bnum = hs mod !(ht.capacity) in
  let bucket = !(ht.buckets).(bnum) in
  let res = List.find_opt (fun (k', _) -> k' = k) bucket in
  match res with
  | Some (_, v) -> Some v
  | _ -> None

Removal of elements requires a bit of care, so that the size ht.size of the table (i.e., the number of elements it contains) is suitably decreased:

(* Slow remove - introduced for completeness *)
let remove ht k =
  let hs = Hashtbl.hash k in
  let bnum = hs mod !(ht.capacity) in
  let bucket = !(ht.buckets).(bnum) in
  let clean_bucket =
    List.filter (fun (k', _) -> k' <> k) bucket in
  !(ht.buckets).(bnum) <- clean_bucket;
  (if List.length bucket > List.length clean_bucket
   then ht.size := !(ht.size) - 1);
  assert (!(ht.size) >= 0)

Finally, printing is defined in almost the same way as before:

let print_hash_table ppk ppv ht =
  let open Printf in
  print_endline @@ sprintf "Capacity: %d" !(ht.capacity);
  print_endline @@ sprintf "Size: %d" !(ht.size);
  print_endline "Buckets:";
  let buckets = !(ht.buckets) in
  for i = 0 to !(ht.capacity) - 1 do
    let bucket = buckets.(i) in
    if bucket <> [] then (
      (* Print bucket *)
      let s = List.fold_left
          (fun acc (k, v) ->
             acc ^ (sprintf "(%s, %s); ") (ppk k) (ppv v)) "" bucket in
      printf "%d -> [ %s]\n" i s)
  done

Let us experiment with the resizable implementation by means of defining the following modules:


module RHT = ResizableListBasedHashTable(IntKey)
module ResizableHTTester = HashTableTester(RHT)

Let us see how the table grows:

utop # let a = generate_key_value_array 20;;
val a : (int * string) array =
  [|(17, "hvevv"); (9, "epsxo"); (14, "prasb"); (5, "ozdnt"); (10, "hglck");
    (18, "ayqtk"); (4, "kemle"); (11, "kcrtm"); (14, "qxcnk"); (19, "czzva");
    (4, "ayuys"); (7, "cdrhf"); (5, "ukobi"); (19, "hwsjs"); (3, "uyrla");
    (0, "uldju"); (7, "rkolw"); (6, "gnzzo"); (19, "nksfe"); (4, "geevu")|]
utop # let t = ResizableHTTester.mk_test_table_from_array_length a 5;;
val t : (SHT.key * SHT.key) RHT.hash_table =
  ... size = {contents = 20}; capacity = {contents = 20}}
utop # RHT.print_hash_table pp_kv pp_kv t;;
Capacity: 20
Size: 20
Buckets:
2 -> [ ((14, qxcnk), (14, qxcnk)); ]
3 -> [ ((7, rkolw), (7, rkolw)); ((0, uldju), (0, uldju)); ((19, hwsjs), (19, hwsjs)); ]
4 -> [ ((19, nksfe), (19, nksfe)); ((4, kemle), (4, kemle)); ((18, ayqtk), (18, ayqtk)); ((5, ozdnt), (5, ozdnt)); ]
5 -> [ ((19, czzva), (19, czzva)); ]
6 -> [ ((3, uyrla), (3, uyrla)); ]
8 -> [ ((4, ayuys), (4, ayuys)); ]
9 -> [ ((6, gnzzo), (6, gnzzo)); ]
10 -> [ ((17, hvevv), (17, hvevv)); ((7, cdrhf), (7, cdrhf)); ]
11 -> [ ((14, prasb), (14, prasb)); ]
12 -> [ ((11, kcrtm), (11, kcrtm)); ]
13 -> [ ((5, ukobi), (5, ukobi)); ]
16 -> [ ((9, epsxo), (9, epsxo)); ]
17 -> [ ((4, geevu), (4, geevu)); ((10, hglck), (10, hglck)); ]

To emphasise: even though we have created the table with capacity 5 (via mk_test_table_from_array_length a 5), it has since grown as more elements were added, so that its capacity has quadrupled, becoming 20.

We can also test the resizable implementation of a hash-table similarly to how we tested the simple one:


let%test "ResizableHashTable insert" = let open ResizableHTTester in let a = generate_key_value_array 1000 in let ht = mk_test_table_from_array_length a 50 in test_table_get ht a

Comparing performance of different implementations

Which implementation of a hash-table behaves better in practice? We are going to answer this question by setting up an experiment. For this, we define the following two functions for stress-testing our two implementations:

let insert_and_get_bulk_simple a m =
  Printf.printf "Creating simple hash table:\n";
  let ht = time (SimpleHTTester.mk_test_table_from_array_length a) m in
  Printf.printf "Fetching from simple hash table on the array of size %d:\n" (Array.length a);
  let _ = time SimpleHTTester.test_table_get ht a in ()

let insert_and_get_bulk_resizable a m =
  Printf.printf "Creating resizable hash table:\n";
  let ht = time (ResizableHTTester.mk_test_table_from_array_length a) m in
  Printf.printf "Fetching from resizable hash table on the array of size %d:\n" (Array.length a);
  let _ = time ResizableHTTester.test_table_get ht a in ()

The next function is going to run both insert_and_get_bulk_simple and insert_and_get_bulk_resizable on the same array (of a given size n), creating two hash-tables of the initial size m and measuring:

1. how long it takes to fill up each table, and
2. how long it takes to fetch the elements.

This is done as follows:

let compare_hashing_time n m =
  let a = generate_key_value_array n in
  insert_and_get_bulk_simple a m;
  print_endline "";
  insert_and_get_bulk_resizable a m


When the number of buckets is of the same order of magnitude as the number of items being inserted, the simple hash-table performs better than the resizable one (as resizing takes a considerable amount of time):

utop # compare_hashing_time 10000 1000;;
Creating simple hash table:
Execution elapsed time: 0.005814 sec
Fetching from simple hash table on the array of size 10000:
Execution elapsed time: 0.000000 sec
Creating resizable hash table:
Execution elapsed time: 0.010244 sec
Fetching from resizable hash table on the array of size 10000:
Execution elapsed time: 0.000000 sec

However, for a number of buckets much smaller than the number of elements to be inserted, the benefits of dynamic resizing become clear:

utop # compare_hashing_time 25000 50;;
Creating simple hash table:
Execution elapsed time: 0.477194 sec
Fetching from simple hash table on the array of size 25000:
Execution elapsed time: 0.000002 sec
Creating resizable hash table:
Execution elapsed time: 0.020068 sec
Fetching from resizable hash table on the array of size 25000:
Execution elapsed time: 0.000000 sec

Bloom Filters and Their Applications

• File: BloomFilters.ml

Hashing can be useful for other applications besides distributing elements in an array when implementing a hash-table. For instance, we can also employ hashing to compactly represent the information of whether a certain element is or is not in a given set. To do so, let us first introduce the following notion, dealing with algorithms and data structures that occasionally give "wrong" answers.

True Negatives and False Positives

Let us notice that some applications do not require always having the correct answer to the question

"Is the element e in the set s?"

Imagine that we have a data structure, such that

• when the data structure answers "no" to the above question, it means "the element e is certainly not in the set s", but
• when it answers "yes" to the above question, it means that "the element e might or might not be in the set s".

This behaviour of a data structure is typically called "sound-but-incomplete". The first scenario (a precise no-answer) is called a "true negative", while the second scenario, in which the answer "yes" is given for an element not in the set, is called a "false positive" (if the element is in the set, it is a "true positive").
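To make these notions concrete, here is a tiny "lossy set" that remembers only one hash bit per element. This is a hand-rolled illustration (not part of the course code): a "no" answer is always correct, while a "yes" may be a false positive due to hash collisions.

let lossy_slots = Array.make 8 false

let lossy_add e =
  lossy_slots.(Hashtbl.hash e mod 8) <- true

(* "false" is definite (a true negative); "true" only means "maybe" *)
let lossy_maybe_mem e =
  lossy_slots.(Hashtbl.hash e mod 8)

let () =
  lossy_add "apple";
  assert (lossy_maybe_mem "apple");
  (* An absent element may still hit an occupied slot: a false positive. *)
  ignore (lossy_maybe_mem "pear")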

Data structures that experience false positives, but no false negatives (the answer "no" is precise), while providing a compact representation, are very useful and are employed in applications that can tolerate imprecise, conservatively given "yes"-answers. In this section, we will study one such data structure, called Bloom filter — a compact representation of a "lossy" set that provides precisely this functionality. Bloom filters are widely used in practice:

• The Google Chrome web browser used to use a Bloom filter to identify malicious URLs.
• Medium uses Bloom filters to avoid recommending articles a user has previously read.
• Bitcoin uses Bloom filters to speed up wallet synchronisation.

High-level intuition

A Bloom filter is a very simple data structure, which uses hashing. It is represented by a large boolean/bit array (you can think of it as an array of 0s and 1s) of size m, and a finite number k of different hash functions, which map elements to be added to the set of interest to int (as usual). For each new element to be added to the set, all k hash functions are used to determine the specific bits corresponding to this element in the array. The combination of those positions is the element's "image". For instance, the following image shows a Bloom filter with m = 15 and k = 3, with an example of three elements, X, Y, and Z, added to it.


To determine whether an element is in the set, one needs to compute its k hashes and check the corresponding bits in the array, which takes constant time (\(O(k)\)). Having more than one hash function reduces the risk of collisions if the number of elements is smaller than the size of the filter; however, two or more different elements can still have all k hashes in common. Elements are never removed from a Bloom filter.
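As an aside (a standard estimate, not derived in these notes): after inserting \(n\) elements into a filter with \(m\) bits and \(k\) hash functions, the probability of a false positive is approximately \((1 - e^{-kn/m})^k\), which is minimised by choosing \(k \approx (m/n)\ln 2\). This gives a rule of thumb for picking the filter size and the number of hash functions.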

Bloom filter signature

Let us first define the Bloom filter signature. It starts from the module describing the type of its elements and the list of hash functions:

module type BloomHashing = sig
  type t
  val hash_functions : (t -> int) list
end

The Bloom filter itself is a functor, parameterised by BloomHashing:

module type BloomFilter = functor (H: BloomHashing) -> sig
  type t
  val mk_bloom_filter : int -> t
  val insert : t -> H.t -> unit
  val contains : t -> H.t -> bool
  val print_filter : t -> unit
end


Implementing a Bloom filter

The implementation of the Bloom filter is simply an array of booleans (which we use to represent 0/1-bits) of a fixed size:

module BloomFilterImpl : BloomFilter = functor (H: BloomHashing) -> struct
  (* Type of filter *)
  type t = {
    slots : bool array;
    size : int
  }
  (* Functions come here *)
end

Creation of a Bloom filter is trivial:

let mk_bloom_filter n =
  let a = Array.make n false in
  {slots = a; size = n}

Insertion amounts to computing all hashes for the element and setting the corresponding array bits to true:

let insert f e =
  let n = f.size in
  List.iter (fun hash ->
      let h = (hash e) mod n in
      f.slots.(h) <- true) H.hash_functions

Checking for membership computes the same hashes and tests whether all of the corresponding bits are set:

let contains f e =
  let n = f.size in
  let res = ref true in
  List.iter (fun hash ->
      let h = (hash e) mod n in
      res := !res && f.slots.(h)) H.hash_functions;
  !res

We can implement a printer for the Bloom filter by means of one of the previous modules:


module BP = ArrayPrinter(struct
    type t = bool
    let pp b = if b then "1" else "0"
  end)

let print_filter t =
  let open BP in
  print_array t.slots

Experimenting with Bloom filters

Let us fix a hashing strategy for our favourite data type int * string:

module IntStringHashing = struct
  type t = int * string
  let hash1 (k, _) = Hashtbl.hash k
  let hash2 (_, v) = Hashtbl.hash v
  let hash3 (k, _) = k
  let hash_functions = [hash1; hash2; hash3]
end

Instantiating the filter:

module IntStringFilter = BloomFilterImpl(IntStringHashing)

Filling a filter from an array:

let fill_bloom_filter m n =
  let open IntStringFilter in
  let filter = mk_bloom_filter m in
  let a = generate_key_value_array n in
  for i = 0 to n - 1 do
    insert filter a.(i)
  done;
  (filter, a)

Let's do some experiments:

utop # let (f, a) = fill_bloom_filter 20 10;;
val f : IntStringFilter.t = <abstr>
val a : (int * string) array =
  [|(4, "ayuys"); (7, "cdrhf"); (4, "ukobi"); (5, "hwsjs"); (8, "uyrla");
    (0, "uldju"); (3, "rkolw"); (7, "gnzzo"); (7, "nksfe"); (4, "geevu")|]


utop # IntStringFilter.contains f (3, "rkolw");;
- : bool = true
utop # IntStringFilter.contains f (13, "aaa");;
- : bool = false
utop # IntStringFilter.print_filter f;;
[| 1; 0; 0; 1; 1; 1; 0; 1; 1; 1; 1; 0; 1; 0; 1; 1; 0; 1; 1; 0 |]
- : unit = ()

Testing Bloom Filters

Testing for true positives (i.e., that there are no false negatives):

let%test "bloom filter true positives" =
  let open IntStringFilter in
  let fsize = 2000 in
  let len = 1000 in
  let (f, a) = fill_bloom_filter fsize len in
  for i = 0 to len - 1 do
    assert (contains f a.(i))
  done;
  true

Testing for true negatives:

let%test "bloom filter true negatives" =
  let open IntStringFilter in
  let fsize = 2000 in
  let len = 1000 in
  let (f, a) = fill_bloom_filter fsize len in
  let al = array_to_list 0 len a in
  let b = generate_key_value_array len in
  for i = 0 to len - 1 do
    let e = b.(i) in
    if (not (contains f e))
    then assert (not (List.mem e al))
  done;
  true

However, there can also be false positives, although we do not check for them here.
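One can, however, estimate the false-positive rate empirically. The following sketch (an illustration added here, reusing fill_bloom_filter and generate_key_value_array from above; the sizes are arbitrary) counts how often the filter says "yes" for elements that are certainly not in the set:

let estimate_false_positive_rate fsize len =
  let open IntStringFilter in
  let (f, a) = fill_bloom_filter fsize len in
  let al = Array.to_list a in
  let fp = ref 0 in
  let total = ref 0 in
  let b = generate_key_value_array len in
  Array.iter (fun e ->
      (* Only elements certainly absent can be false positives *)
      if not (List.mem e al) then begin
        incr total;
        if contains f e then incr fp
      end) b;
  float_of_int !fp /. float_of_int (max !total 1)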


Improving Simple Hash-table with a Bloom filter

Let us put the Bloom filter to some good use by improving our simple implementation of a hash table. The way we implemented hash-tables before made them waste too much time on iterating through the buckets before adding or getting an element. This is something that can be improved with a Bloom filter: indeed, if we know that there is no element with a certain key in the bucket (an answer that a Bloom filter can give precisely), we don't have to look for it. The price to pay for this speed-up is the inability to remove elements from the hash-table (as one cannot remove elements from a Bloom filter). We start our hash-table with the following preamble. Its core data structure now gets enhanced with a Bloom filter:

module BloomHashTable (K: BloomHashing) = struct
  type key = K.t
  (* Adding bloom filter *)
  module BF = BloomFilterImpl(K)
  type 'v hash_table = {
    buckets : 'v list array;
    capacity : int;
    filter : BF.t
  }
  (* Functions come here *)
end

For simplicity, upon creating a hash table, we make a Bloom filter with a fixed capacity:

let mk_new_table cap =
  let buckets = Array.make cap [] in
  (* Pick reasonably large BF size *)
  let filter = BF.mk_bloom_filter 15000 in
  {buckets = buckets; capacity = cap; filter = filter}

Insertion also updates the filter correspondingly:

let insert ht k v =
  let hs = Hashtbl.hash k in
  let bnum = hs mod ht.capacity in
  let bucket = ht.buckets.(bnum) in
  let filter = ht.filter in

  let clean_bucket =
    (* New stuff *)
    if BF.contains filter k
    (* Only filter if the filter ostensibly contains the key *)
    then List.filter (fun (k', _) -> k' <> k) bucket
    else bucket in
  (* Missed in the initial implementation *)
  BF.insert filter k;
  ht.buckets.(bnum) <- (k, v) :: clean_bucket

Fetching an element now first consults the filter: if the filter reports that the key is certainly absent, we return None immediately, without walking through the bucket:

let get ht k =
  let filter = ht.filter in
  if BF.contains filter k
  then
    let hs = Hashtbl.hash k in
    let bnum = hs mod ht.capacity in
    let bucket = ht.buckets.(bnum) in
    let res = List.find_opt (fun (k', _) -> k' = k) bucket in
    match res with
    | Some (_, v) -> Some v
    | _ -> None
  else None

As announced before, removal is prohibited:

let remove _ _ = raise (Failure "Removal is deprecated!")

Comparing performance

Let us instantiate the Bloom filter-enhanced hash table:

module BHT = BloomHashTable(IntStringHashing)
module BHTTester = HashTableTester(BHT)

Similarly to the methods for testing the performance of previously defined hash-tables, we implement the following function:

let insert_and_get_bulk_bloom a m =
  Printf.printf "Creating Bloom hash table:\n";
  let ht = time (BHTTester.mk_test_table_from_array_length a) m in
  Printf.printf "Fetching from Bloom hash table on the array of size %d:\n" (Array.length a);
  let _ = time BHTTester.test_table_get ht a in ()


Now, let us compare the Bloom filter-powered simple table versus the vanilla simple hash-table:

let compare_hashing_time_simple_bloom n m =
  let a = generate_key_value_array n in
  insert_and_get_bulk_simple a m;
  print_endline "";
  insert_and_get_bulk_bloom a m

Running the experiments, we observe not much gain when the number of elements and the number of buckets are in the same ballpark:

utop # compare_hashing_time_simple_bloom 10000 5000;;
Creating simple hash table:
Execution elapsed time: 0.003352 sec
Fetching from simple hash table on the array of size 10000:
Execution elapsed time: 0.000001 sec
Creating Bloom hash table:
Execution elapsed time: 0.007994 sec
Fetching from Bloom hash table on the array of size 10000:
Execution elapsed time: 0.000001 sec

However, the difference is noticeable when the number of buckets is small, and the size of the filter is still comparable with the number of elements being inserted:

utop # compare_hashing_time_simple_bloom 15000 20;;
Creating simple hash table:
Execution elapsed time: 0.370876 sec
Fetching from simple hash table on the array of size 15000:
Execution elapsed time: 0.000002 sec
Creating Bloom hash table:
Execution elapsed time: 0.234405 sec
Fetching from Bloom hash table on the array of size 15000:
Execution elapsed time: 0.000000 sec

Week 08: Searching in Strings

Substring Search

• File: StringSearch.ml

Imagine that you are searching for a word on a web page or in a Word document.

This problem is known as pattern search in a string. Despite seeming simple, solving it efficiently requires a lot of ingenuity. In this lecture we will see several solutions to it, of increasing implementation complexity and decreasing time demand.

Testing a search procedure

The procedure search takes a string text and a pattern pattern and returns a result of type int option, where Some i denotes the first index in the string text such that pattern starts from it and is fully contained within text. If no such index exists (i.e., pattern is not in text), search returns None.

Even before we implement the search procedure itself, we develop a test for it. The first test function checks if a pattern pattern is indeed in the string text, as reported by the function search:

let test_pattern_in search text pattern =
  let index = get_exn @@ search text pattern in
  let p' = String.sub text index (String.length pattern) in
  assert (pattern = p')

let test_pattern_not_in search text pattern =
  assert (search text pattern = None)
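As a small illustration of this specification, here is how the two test helpers can be exercised; search_stub is a hypothetical stand-in (added here, not from the original notes) for any search function of the right type:

let () =
  let search_stub text pattern =
    let n = String.length text in
    let m = String.length pattern in
    let res = ref None in
    (* Scan right-to-left, so that the *first* match survives *)
    for k = n - m downto 0 do
      if String.sub text k m = pattern then res := Some k
    done;
    !res
  in
  test_pattern_in search_stub "abcd" "bc";      (* found at index 1 *)
  test_pattern_not_in search_stub "abcd" "ce"   (* not present *)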

A very naive search

Our first attempt to implement the search is as follows:

let very_naive_search text pattern =
  let n = String.length text in
  let m = String.length pattern in
  if n < m then None
  else
    let k = ref 0 in
    let res = ref None in
    while !res = None && !k <= n - m do
      (* Extract a candidate substring and compare it to the pattern *)
      let candidate = String.sub text !k m in
      (if candidate = pattern
       then res := Some !k);
      k := !k + 1
    done;
    !res

Using randomly generated strings and patterns (via generate_string_and_patterns), we can test the refined version of this procedure, naive_search, both for positive and for negative results:

let%test "Naive Search Positives" =
  let (s, ps, _) = generate_string_and_patterns 500 5 in
  List.iter (fun p -> test_pattern_in naive_search s p) ps;
  true


let%test "Naive Search True Negatives" = let (s, _, pn) = generate_string_and_patterns 500 5 in List.iter (fun p -> test_pattern_not_in naive_search s p) pn; true Finally, we can provide a higher-order testing procedure for strings, so it would test on a specific string, and on randomly-generated strings (for both positive and negative results), as follows: let search_tester search = let (s, ps, pn) = generate_string_and_patterns 500 5 in List.iter (fun p -> test_pattern_in search big p) patterns; List.iter (fun p -> test_pattern_in search s p) ps; List.iter (fun p -> test_pattern_not_in search s p) pn; true let%test _ = search_tester naive_search

Rabin-Karp Search

• File: RabinKarp.ml

The notion of hashing, studied before for the implementations of hash-tables and Bloom filters, is also very useful for improving the efficiency of detecting patterns in strings. The key idea of the Rabin-Karp algorithm is to speed up the ordinary search by computing the rolling hash of the sub-string currently being checked, and comparing it to the hash of the pattern. First, we define a special hash:

let rk_hash text =
  let h = ref 0 in
  for i = 0 to String.length text - 1 do
    h := !h + Char.code text.[i]
  done;
  !h

The search procedure now takes advantage of it:

let rabin_karp_search text pattern =
  let n = String.length text in
  let m = String.length pattern in
  if n < m then None
  else
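    (* Aside (a remark added here, not part of the original code):
       rk_hash is "rolling" because it is just the sum of character codes.
       Hence the hash of the window text.[i+1 .. i+m] is obtained from the
       hash of text.[i .. i+m-1] in O(1):
         new_hash = old_hash - Char.code text.[i] + Char.code text.[i + m]
       For example,
         rk_hash "bcd" = rk_hash "abc" - Char.code 'a' + Char.code 'd'.
       This is what the main loop below exploits when sliding the window. *)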

    (* Compute as the sum of all characters in pattern *)
    let hpattern = rk_hash pattern in
    let rolling_hash = ref @@ rk_hash (String.sub text 0 m) in
    let i = ref 0 in
    let res = ref None in
    while !res = None && !i <= n - m do
      (* Compare the substring only if the hashes coincide *)
      (if !rolling_hash = hpattern &&
          String.sub text !i m = pattern
       then res := Some !i);
      (* Update the rolling hash for the next window *)
      (if !i < n - m
       then rolling_hash :=
           !rolling_hash - Char.code text.[!i] + Char.code text.[!i + m]);
      i := !i + 1
    done;
    !res

To compare the running times of different search procedures, we rely on the following helper, which exercises a given search function on the lists ps and pn of patterns that are, correspondingly, present in and absent from the string s:

let evaluate_search search name s ps pn =
  Printf.printf "[%s] Pattern in: " name;
  let _ = time (List.iter (fun p -> test_pattern_in search s p)) ps in
  Printf.printf "[%s] Pattern not in: " name;
  let _ = time (List.iter (fun p -> test_pattern_not_in search s p)) pn in
  ()

First, let's compare on random strings:

let compare_string_search n m =
  let (s, ps, pn) = generate_string_and_patterns n m in
  evaluate_search naive_search "Naive" s ps pn;
  evaluate_search rabin_karp_search "Rabin-Karp" s ps pn

That does not show much difference:

utop # compare_string_search 20000 50;;
[Naive] Pattern in: Execution elapsed time: 0.999535 sec
[Naive] Pattern not in: Execution elapsed time: 1.951543 sec
[Rabin-Karp] Pattern in: Execution elapsed time: 1.112753 sec
[Rabin-Karp] Pattern not in: Execution elapsed time: 2.155506 sec


In fact, Rabin-Karp is even a bit slower! The reason for this is that the Rabin-Karp rolling hash has too many collisions. In effect, we almost always have to compare strings in the same way as in the naive search, but, in addition to that, we also have to maintain the rolling hash value. Now, let us show where it shines. For this, let us create very repetitive strings:

let repetitive_string n =
  let ast = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaa" in
  let pat1 = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaab" in
  let pat2 = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaac" in
  let mk n =
    let t = List.init n (fun x -> if x = n - 1 then pat1 else ast) in
    String.concat "" t
  in
  (mk n, [pat1], [pat2])

Now, let us re-design the experiment using the following function:

let compare_string_search_repetitive n =
  let (s, ps, pn) = repetitive_string n in
  evaluate_search naive_search "Naive" s ps pn;
  evaluate_search rabin_karp_search "Rabin-Karp" s ps pn

Once we run it:

utop # compare_string_search_repetitive 50000;;
[Naive] Pattern in: Execution elapsed time: 1.298623 sec
[Naive] Pattern not in: Execution elapsed time: 1.305244 sec
[Rabin-Karp] Pattern in: Execution elapsed time: 0.058651 sec
[Rabin-Karp] Pattern not in: Execution elapsed time: 0.058463 sec
- : unit = ()

The superiority of the Rabin-Karp algorithm becomes obvious.

Knuth–Morris–Pratt Algorithm

• File: KMP.ml

This is the first algorithm for string matching in \(O(n)\), where \(n\) is the size of the text in which the search takes place. It was independently invented by Donald Knuth, Vaughan Pratt, and James H. Morris, who published it together in a joint paper.


It is known as one of the most non-trivial basic algorithms, and it is commonly presented as-is, with explanations of its pragmatics and complexity. In this chapter, we will take a look at a systematic derivation of the algorithm from a naive string search, eventually touching upon a very interesting (although somewhat non-obvious) idea: interrupted partial matches can be used to optimise the search in the rest of the text by "fast-forwarding" through several positions, without re-matching completely starting from the next character. A very nice video explaining KMP is available on YouTube. This chapter shows how to systematically derive KMP from the naive search. We will not cover this in the lecture, but you are encouraged to go through the steps below. The material of this chapter is based on this blog article, which, in its turn, is based on this research paper by Prof. Olivier Danvy and his co-authors.

Revisiting the naive algorithm

Let us start by re-implementing the naive search algorithm with a single loop that handles both indices k and j, so that the former ranges over the text, and the latter goes over the pattern:

let naive_search_one_loop text pattern =
  let n = length text in
  let m = length pattern in
  if n < m then None
  else
    let k = ref 0 in
    let j = ref 0 in
    let stop = ref false in
    let res = ref None in
    while !k <= n && not !stop do
      if !j = m then begin
        (* The pattern is fully matched *)
        res := Some (!k - !j);
        stop := true
      end
      else if !k = n then stop := true
      else if text.[!k] = pattern.[!j] then begin
        k := !k + 1;
        j := !j + 1
      end
      else begin
        (* A mismatch: restart from the next position in the text *)
        k := !k - !j + 1;
        j := 0
      end
    done;
    !res

A search attempt can thus end in two ways: either the pattern is found at some position, or the matching is interrupted with some prefix of the pattern (of length j) matched. The following datatype and wrapper make this explicit, converting the result back to an option:

type search_result =
  | Found of int
  | Interrupted of int

let global_search search text pattern =
  let n = length text in
  let m = length pattern in
  match search pattern m text n 0 0 with
  | Found x -> Some x
  | _ -> None

The same algorithm can now be re-formulated as the following recursive procedure:

let search_rec =
  let rec search pattern m text n j k =
    if j = m then Found (k - j)
    else if k = n then Interrupted j
    else if pattern.[j] = text.[k]
    then search pattern m text n (j + 1) (k + 1)
    else search pattern m text n 0 (k - j + 1)
  in
  global_search search

The signature of the inner search seems quite verbose, but it is important for the derivation, which is coming: the first three parameters are the pattern, its size m, and the text; n stands for the size of the text, but it also limits the search range on the right (and will be a subject to manipulation in the future). Finally, j and k are the current (and also initial for the first run) values of the running indices within pattern and text, correspondingly. So far, we don't make any interesting use of the interrupt index j in the case when the inner search returns Interrupted j.


Relating Matched Text and the pattern

Let us notice the first interesting detail: at any moment, the positions of j and k relate a prefix of the pattern and a substring of the text that fully match. This can be reinforced by the following invariants, instrumenting the search body:

let search_inv =
  let rec search pattern m text n j k =
    assert (0 <= j && j <= m && j <= k && k <= n);
    (* The matched prefix of the pattern equals the text behind k *)
    assert (sub pattern 0 j = sub text (k - j) j);
    if j = m then Found (k - j)
    else if k = n then Interrupted j
    else if pattern.[j] = text.[k]
    then search pattern m text n (j + 1) (k + 1)
    else search pattern m text n 0 (k - j + 1)
  in
  global_search search

When a sub-search is known to end with an interruption, the following helper extracts the interrupt index:

let assertInterrupted = function
  | Found _ -> assert false
  | Interrupted j -> j

let search_assert =
  let rec search pattern m text n j k =
    if j = m then Found (k - j)
    else if k = n then Interrupted j
    else if pattern.[j] = text.[k]
    then search pattern m text n (j + 1) (k + 1)
    else if j = 0
    then search pattern m text n 0 (k + 1)
    else
      let j' = assertInterrupted @@ search pattern m text k 0 (k - j + 1) in
      search pattern m text n j' k
  in
  global_search search

Exploiting the Prefix Equality

From the explanations above, recall that the sub-strings sub pattern 0 j and sub text (k - j) j are equal. Therefore, the sub-call search pattern m text k 0 (k - j + 1) searches for the pattern (or, rather, the interrupt index) within (a prefix of a suffix of) the pattern itself. Therefore, we can remove text from there, thus making this call work exclusively on the pattern:

let search_via_pattern =
  let rec search pattern m text n j k =
    if j = m then Found (k - j)
    else if k = n then Interrupted j
    else if pattern.[j] = text.[k]
    then search pattern m text n (j + 1) (k + 1)
    else if j = 0
    then search pattern m text n 0 (k + 1)
    else
      (* So we're looking in our own prefix *)
      let j' = assertInterrupted @@ search pattern m pattern j 0 1 in
      assert (j' < j);
      search pattern m text n j' k
  in
  global_search search


Tabulating the interrupt indices

Since the information about interruptions and fast-forwarding can be calculated using only the pattern, without the text involved, we might want to pre-compute and tabulate it before running the search, obtaining a table : int array with this information. In other words, the value j' = table.(j) answers the question: how many positions j' of the pattern can I skip when starting to look in a text that begins with my pattern's substring pattern[1 .. j] (i.e., precisely the value search pattern m pattern j 0 1). If we had a table like this, we could formulate search as the following tail-recursive procedure:

let rec loop table pattern m text n j k =
  if j = m then Found (k - j)
  else if k = n then Interrupted j
  else if pattern.[j] = text.[k]
  then loop table pattern m text n (j + 1) (k + 1)
  else if j = 0
  then loop table pattern m text n 0 (k + 1)
  else loop table pattern m text n table.(j) k

To populate such a table, however, we need the search procedure itself. Fortunately, the size of the pattern m is typically much smaller than the size of the text, so creating this table pays off. In the following implementation the inner procedure loop_search defines the standard search (as before) and uses it to populate the table, which is then used for the main matching procedure:

let search_with_inefficient_init =
  let loop_search pattern _ text n j k =
    let rec search pattern m text n j k =
      if j = m then Found (k - j)
      else if k = n then Interrupted j
      else if pattern.[j] = text.[k]
      then search pattern m text n (j + 1) (k + 1)
      else if j = 0
      then search pattern m text n 0 (k + 1)
      else
        (* So we're looking in our own prefix *)
        let j' = assertInterrupted @@ search pattern m pattern j 0 1 in
        assert (j' < j);
        search pattern m text n j' k
    in


    let m = length pattern in
    let table = Array.make m 0 in
    for j = 1 to m - 1 do
      table.(j) <- assertInterrupted @@ search pattern m pattern j 0 1
    done;
    loop table pattern m text n j k
  in
  global_search loop_search

For manipulating optional values in what follows, we also rely on the following helper:

let map_option o f z = match o with
  | None -> z
  | Some n -> f n

In words, map_option applies f to the value n within o, if o = Some n, or returns z otherwise.
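For example (a small illustration added here, not from the original notes):

let () =
  assert (map_option (Some 41) (fun n -> n + 1) 0 = 42);  (* f applied to 41 *)
  assert (map_option None (fun n -> n + 1) 0 = 0)         (* the default z *)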

Inserting an element into a BST

The mk_tree function defined above creates an empty tree. Let us now implement a procedure for populating it with elements by inserting them one by one:

let insert t e =
  let rec insert_element n e =
    let m = mk_node e in
    if e < n.value
    then (match left n with
        | Some m -> insert_element m e
        | None ->
          m.parent := Some n;
          n.left := Some m;
          true)
    else if e > n.value
    then (match right n with
        | Some m -> insert_element m e
        | None ->
          m.parent := Some n;
          n.right := Some m;
          true)
    else false
  in
  match !(t.root) with
  | None -> (
      t.root := Some (mk_node e);
      t.size := 1;
      true)
  | Some n ->
    if insert_element n e
    then (t.size := !(t.size) + 1; true)
    else false

Notice that the main working routine insert_element respects the BST property defined above: it positions the node m with the element e so that it ends up in the correct subtree (smaller-left/greater-right) with respect to all its parent nodes. Finally, insert_element returns a boolean to indicate whether the element has indeed been added (true) or ignored as a duplicate (false). In the former case the size of the tree is increased; in the latter it remains the same.

Binary-Search-Tree Invariant

Let us now assert that tree-manipulating operations, such as insert, indeed preserve the BST property discussed above. For this, let us define the BST invariant in the form of the following function:

let check_bst_inv t =
  let rec walk node p =
    (p node.value) &&
    let res_left = match left node with
      | None -> true
      | Some l -> walk l (fun w -> p w && w <= node.value)
    in
    let res_right = match right node with
      | None -> true
      | Some r -> walk r (fun w -> p w && w >= node.value)
    in
    res_left && res_right
  in
  match !(t.root) with
  | None -> true
  | Some n -> walk n (fun _ -> true)

The main recursive sub-function walk works by "growing" a predicate p that applies to each node further down the tree, making sure that it is correctly positioned with regard to all its parents. At the top level p is instantiated with (fun _ -> true), as there are no restrictions imposed on the root of the tree, but more and more conjuncts are added as the checking proceeds recursively.

Testing Tree Operations

Let us put our invariant to work by using it to test the correctness of insert. We do so by first defining a function for generating random trees from random arrays via insertion:


open BinarySearchTree

let mk_tree_of_size n =
  let t = mk_tree () in
  let a = generate_key_value_array n in
  for i = 0 to n - 1 do
    ignore (insert t a.(i))
  done;
  t

Next, we check that the generated trees indeed satisfy the BST property:

let%test "Testing insertion" =
  let n = 1000 in
  let t = mk_tree_of_size n in
  check_bst_inv t

Printing a Tree

It would be very nice if we could not only test but also visualise our binary search trees. Unfortunately, printing a tree in a standard top-down fashion requires quite a bit of book-keeping of tree-specific information. Printing a tree left-to-right, however, can be done quite easily as follows:

let print_tree pp snum t =
  let print_node_with_spaces l s =
    for i = 0 to s - 1 do
      Printf.printf " "
    done;
    print_endline (pp l.value)
  in
  let rec walk s node = match node with
    | None -> ()
    | Some n -> begin
        walk (s + snum) (right n);
        print_node_with_spaces n s;
        walk (s + snum) (left n)
      end
  in
  map_option (get_root t) (fun n -> walk 0 (Some n)) ()

The first auxiliary function print_node_with_spaces prints a string of s spaces followed by the value of the node l.


The second function walk traverses the tree recursively, accumulating the "offset" proportional to the depth of the tree node. It first prints the right sub-tree, then the node itself, and then the left sub-tree, making use of the accumulated offset for printing the necessary number of spaces. Finally, it runs walk for the top-level root node, if it exists. Let us observe the effect of print_tree by instantiating it to print trees of key-value pairs:

let print_kv_tree = print_tree
    (fun (k, v) -> Printf.sprintf "(%d, %s)" k v) 12

We can now use utop to experiment with it:

utop # open BST;;
utop # open BinarySearchTree;;
utop # let t = mk_tree ();;
val t : '_weak1 tree = {root = {contents = None}}
utop # let a = [|(4, "ayuys"); (7, "cdrhf"); (4, "ukobi"); (5, "hwsjs");
                 (8, "uyrla"); (0, "uldju"); (3, "rkolw"); (7, "gnzzo");
                 (7, "nksfe"); (4, "geevu")|];;
utop # for i = 0 to 9 do ignore (insert t a.(i)) done;;
- : unit = ()
utop # print_kv_tree t;;
                        (8, uyrla)
                                                (7, nksfe)
                                    (7, gnzzo)
            (7, cdrhf)
                                    (5, hwsjs)
                        (4, ukobi)
                                    (4, geevu)
(4, ayuys)
                        (3, rkolw)
            (0, uldju)
- : unit = ()

That is, one can see that (4, "ayuys") is the root of the tree, and the whole structure satisfies the BST property.

Searching Elements

We define the search function so that it returns not just the element, but also the node that contains it. It does so by recursively traversing the tree, while relying on its BST property:

let search t k =
  let rec walk k n =
    let nk = n.value in
    if k = nk then Some n
    else if k < nk
    then match left n with
      | None -> None
      | Some l -> walk k l
    else match right n with
      | None -> None
      | Some r -> walk k r
  in
  map_option (get_root t) (walk k) None

In the absence of an abstract module signature, it is quite dangerous to return a node (not just its value), as one can break the BST properties by messing with its mutable components (e.g., the references to the left/right children). However, returning a node also simplifies the implementation of various testing and manipulation procedures, specifically the deletion of tree nodes.

Tree Traversals

There are multiple ways to flatten a tree into a list, which can be convenient for the sake of testing and other inspections. The simplest way to do it is via an accumulator (implemented as a mutable queue) and a procedure known as Depth-First Search (DFS), which traverses the tree recursively, following its order (sometimes this traversal is also called in-order traversal):

open Queues
open DLLBasedQueue

let depth_first_search_rec t =
  let rec walk q n =
    (match left n with
     | Some l -> walk q l
     | None -> ());
    enqueue q n.value;
    (match right n with
     | Some r -> walk q r
     | None -> ())
  in
  let acc = (mk_queue 0) in
  map_option (get_root t) (walk acc) ();
  queue_to_list acc

With the call stack, DFS traverses the tree in a Last-In-First-Out (LIFO) mode. By replacing the implicit stack with an explicit mutable queue (First-In-First-Out, FIFO), we can obtain an alternative traversal, known as Breadth-First Search (BFS), which accumulates the tree elements by following its "layers":


let breadth_first_search_loop t =
  let loop wlist q depth =
    while not (is_empty wlist) do
      let n = get_exn @@ dequeue wlist in
      (match left n with
       | Some l -> enqueue wlist l
       | _ -> ());
      enqueue q n.value;
      (match right n with
       | Some r -> enqueue wlist r
       | _ -> ())
    done
  in
  let acc = (mk_queue 0) in
  let wlist = mk_queue 0 in
  (match get_root t with
   | None -> ()
   | Some n -> begin
       enqueue wlist n;
       loop wlist acc 0
     end);
  queue_to_list acc

We can also define all elements of the set in terms of the traversal:

let elements t = breadth_first_search_loop t
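To see the difference between the two traversal orders, consider the following small sketch (an illustration added here, using the definitions above):

let () =
  let t = mk_tree () in
  List.iter (fun e -> ignore (insert t e)) [2; 1; 3];
  (* In-order DFS lists the elements in sorted order *)
  assert (depth_first_search_rec t = [1; 2; 3]);
  (* BFS lists the root layer first *)
  assert (breadth_first_search_loop t = [2; 1; 3])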

Testing Element Retrieval and Tree Traversals

As we know well how to work with lists, we can use the traversals to test each other, as well as the search function:

(******************************************)
(*          Testing traversals            *)
(******************************************)

let check_elem_in_tree t e =
  let n = search t e in
  (get_exn @@ n).value = e

let%test "Testing DFS" =
  let n = 1000 in
  let t = mk_tree_of_size n in
  let l1 = depth_first_search_rec t in
  List.length l1 = n &&
  List.for_all (fun e -> check_elem_in_tree t e) l1 &&
  sorted l1

let%test "Testing BFS" =
  let n = 1000 in
  let t = mk_tree_of_size n in
  let l1 = depth_first_search_rec t in
  let l2 = breadth_first_search_loop t in
  List.length l1 = n &&
  List.for_all (fun e -> List.mem e l2) l1 &&
  List.for_all (fun e -> List.mem e l1) l2

(******************************************)
(*           Testing retrieval            *)
(******************************************)

let%test "Testing retrieval" =
  let n = 1000 in
  let t = mk_tree_of_size n in
  let m = Random.int n in
  let l = breadth_first_search_loop t in
  let e = List.nth l m in
  let z = search t e in
  z <> None

More BST operations

Thanks to its invariant, a BST makes it almost trivial to implement operations such as

• getting the minimum/maximum element of a set represented by a tree;
• finding the successor/predecessor of an element.

For instance, finding the minimal element of a subtree starting from a node n can be achieved by the following operation:

let rec find_min_node n =
  match left n with
  | Some m -> find_min_node m
  | None -> n

Notice that this operation does not find the global tree-wise successor of the element in node n, although that is also possible to do in \(O(\log n)\) operations for a tree that is well-balanced (i.e., not too "tall" and "thin").
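By symmetry, the maximal element is reached by always descending to the right (a small sketch mirroring find_min_node above; it is not part of the original code):

let rec find_max_node n =
  match right n with
  | Some m -> find_max_node m
  | None -> n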


Deleting a node from BST

Deletion of a node from a BST is the most complicated operation, as it requires significant restructuring of the tree in order to maintain its invariant. Deletion of a non-leaf node from a tree requires some other node (along with its subtree) to take its place. This can be achieved by the following operation, performing the "transplantation" of one node by another:

(* Replacing node U by (optional) node V in T. *)
let transplant t u v =
  (match parent u with
   | None -> t.root := v
   | Some p ->
     match left p with
     | Some l when u == l -> p.left := v
     | _ -> p.right := v);
  (* Update parent of v *)
  match v with
  | Some n -> n.parent := parent u
  | _ -> ()

Notice the comparison via when u == l in the implementation above. This is essential: node references must be compared using OCaml's "shallow" (physical) equality mechanism, as structural "deep" equality on references (=) in the case of linked data structures, such as BSTs, may lead to errors that are very difficult to debug. Let us now discuss the possible scenarios for removing a node z from the tree T while preserving the BST property.

1. The simplest case is when z is a leaf, so we can simply remove it.
2. The node z has no left child. In this case, we can simply replace it by its right child (argue why this is correct), as on the picture below:


3. A similar situation takes place when z has only the left child, which replaces it (via transplant):


4. In the case when z has two children, we need to look up the node that corresponds to its successor in the z-rooted subtree wrt. the ordering of elements. In this particular case, such a successor, y, is the immediate right child of z that has no left child itself (convince yourself that in this case y is indeed the successor of z), therefore we can transplant y to replace z:


5. Finally, in the nastiest case, y, the successor of z (in its subtree), is buried deep below z, and potentially has a right child (but no left child, otherwise it wouldn't be the successor of z). In this case we need to perform the transformation as follows:


Specifically, in the last case we first transplant y with its right child x, and then make r, the former right child of z, to be the right child of y. After that we simply transplant y to the place of z. The full code of deletion is as follows:

let delete_node t z =
  t.size := !(t.size) - 1;
  if left z = None
  then transplant t z (right z)
  else if right z = None
  then transplant t z (left z)
  else
    (* Finding the successor of `z` *)
    let z_right_child = (get_exn @@ right z) in
    let y = find_min_node z_right_child in
    (* Fact: `y` has no left child *)
    (if parent y <> None && z != (get_exn @@ parent y)
     then
       (* If y is not immediately under z,
          replace y by its right subtree *)
       let x = right y in
       (transplant t y x;
        y.right := right z;
        (get_exn @@ right y).parent := Some y));
    (* Now `y` replaces `z` at its position *)
    transplant t z (Some y);
    y.left := !(z.left);
    (get_exn @@ left y).parent := Some y

How would we test deletion? We can do so by generating a random BST, choosing a random node z in it, and then checking the following properties of the modified tree after the deletion of z:

• the tree still satisfies the BST invariant;
• it has the previous number of elements minus one;
• all elements of the modified tree, plus the deleted one, are elements of the old tree.

These checks can be automatically performed by the following function, parameterised by the size of the tree:

let test_delete n =
  let t = mk_tree_of_size n in
  let m = Random.int n in
  let l = breadth_first_search_loop t in
  let e = List.nth l m in
  let z = get_exn @@ search t e in
  delete_node t z;
  (* Checking the tree invariant *)
  assert (check_bst_inv t);
  (* Checking the tree size *)
  let ld = breadth_first_search_loop t in
  assert (List.length ld = n - 1);
  (* Checking integrity *)
  assert (List.for_all (fun x -> List.mem x ld || x == e) l)

BST Rotations

In a BST, left and right rotations exchange a node with its right/left child (if present), correspondingly. Diagrammatically, this can be represented by the following picture:

That is, via left rotation, \(y\) becomes the parent of \(x\), and vice versa. The implementation of the left rotation of a node \(x\) in a tree \(T\) is given below:

let left_rotate t x =
  match right x with
  | None -> ()
  | Some y ->
    (* turn y's left subtree into x's right subtree *)
    x.right := left y;
    (if left y <> None
     then (get_exn @@ left y).parent := Some x);
    (* link x's parent to y *)
    y.parent := parent x;
    (match parent x with
     | None -> t.root := Some y
     | Some p ->
       match left p with
       | Some l when x == l -> p.left := Some y
       | _ -> p.right := Some y);
    (* Make x the left child of y *)
    y.left := Some x;
    x.parent := Some y

When a subtree is rotated, the side upon which it is rotated increases its height by one node, while the other subtree decreases its height. This makes tree rotations useful for rebalancing a tree when it becomes "degenerate" (tall and thin), and thus makes it possible to keep the worst-case complexity of tree operations within \(O(\log n)\), without degenerating to \(O(n)\). The implementation of the right BST rotation and the testing of the rotations are left as an exercise.
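To see what "degenerate" means, note that inserting elements in increasing order produces a list-like tree in which every node has only a right child, so searches take linear time. A small sketch (an illustration added here, using the functions above):

let degenerate_spine () =
  let t = mk_tree () in
  (* Each new element is larger than all the previous ones,
     so it always goes to the rightmost position *)
  for i = 1 to 10 do
    ignore (insert t i)
  done;
  t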

Week 12: Graph Algorithms

Representing Graphs

• File: Graphs.ml

Graphs are an important and versatile mathematical abstraction, which is used to represent relations between multiple objects. Such (possibly non-symmetric) relations can frequently be phrased in terms of connectivity and reachability, as we've seen before in the chapter on Equivalence Classes and Union-Find. A graph is commonly represented in mathematics by a pair \(G = (V, E)\), where \(V\) is a set of the graph's vertices (nodes), representing the related elements, and \(E\) is a set of its edges (arcs), such that \(E \subseteq V \times V\).

As examples, \(V\) and \(E\) can represent, correspondingly:

• cities and roads between them;
• routers in a network and connections between them;
• statements in a program and control-flow transitions between them;
• control states of a machine and transitions between them;
• "friendship" relations between users of a social network.

It is common to think of \(V\) as being represented by a segment \(\{0 \ldots (n - 1)\}\) of natural numbers for some \(n\) (so that \(n\) is the size of the set of vertices). However, if the nodes carry additional meaning (e.g., the name of a city), one can define their payload as a function \(\{0 \ldots (n - 1)\} \rightarrow P\) for some set of payload values \(P\). Edges can also be given labels in a similar way, by defining a function \(E \rightarrow L\) for some label set \(L\).

Graphs as Adjacency Lists

Here is our first take on representing graphs as a data structure: by means of adjacency lists (AL). In this representation, each node (a natural number) u points (e.g., as an element of an array) to a list of other nodes v that are immediately reachable from u, i.e., the graph g has an edge (u, v).

For instance, consider the following graph:


It has 6 nodes (numbered 0-5) and can be encoded via the following array:

[|[1; 3]; [4; 2]; [4; 5]; [1]; []; [1; 5]|]

That is, for instance, the node 5 has node 1 and also itself as its next (successor) nodes, immediately reachable via the corresponding edges.
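As a quick illustration of this encoding (a small check added here, not from the original notes):

let adj = [|[1; 3]; [4; 2]; [4; 5]; [1]; []; [1; 5]|]

let () =
  assert (adj.(5) = [1; 5]);  (* successors of node 5 *)
  assert (adj.(4) = [])       (* node 4 has no outgoing edges *)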


Keeping in mind the possibility of adding payload to nodes and labels to edges, we arrange the graph as the following type graph:

module AdjacencyGraphs = struct
  type ('a, 'b) graph = {
    size : int;
    adj : int list array;
    node_payloads : (int * 'a) list ref;
    edge_labels : ((int * int) * 'b) list ref
  }

  let mk_graph n = {
    size = n;
    adj = Array.make n [];
    node_payloads = ref [];
    edge_labels = ref [];
  }

  (* More functions come here *)
end

Creating a graph allocates n nodes, but does not add any edges. As graphs are an inherently imperative (i.e., mutable) structure, we can add edges as follows, by changing the corresponding components of the adjacency array:

let add_edge g src dst =
  assert (in_range g src && in_range g dst);
  let out = g.adj.(src) in
  let out' = List.filter (fun n -> n <> dst) out in
  g.adj.(src) <- dst :: out'

Removing an edge is symmetric: we simply filter it out of the adjacency list of its source node:

let remove_edge g src dst =
  assert (in_range g src && in_range g dst);
  let out = g.adj.(src) in
  let out' = List.filter (fun n -> n <> dst) out in
  g.adj.(src) <- out'

Setting a label of an edge replaces its previous label (if any):

let set_edge_label g src dst l =
  assert (in_range g src && in_range g dst);
  let labs = !(g.edge_labels) in
  let labs' = List.filter
      (fun ((s, d), _) -> (s, d) <> (src, dst))
      labs in
  g.edge_labels := ((src, dst), l) :: labs'

It is not uncommon to need the whole set of edges. We can obtain it as follows, by traversing the entire adjacency array and returning the list of edges:

open Queues

let edges g =
  let open DLLBasedQueue in
  let q = mk_queue g.size in
  for i = 0 to g.size - 1 do
    let next = g.adj.(i) in
    List.iter (fun n -> enqueue q (i, n)) next
  done;
  queue_to_list q

Reading and Printing Graphs

Let us now suggest a way to input graphs, so that they can be converted to the programmatic representation. One way to do so is to provide the size of a graph (in terms of nodes), as well as all the pairs indicating the directed edges. For instance, the graph above can be defined by the following list of strings, where the first one is its size:

let small_graph_shape =
  ["6";
   "0 1"; "0 3"; "3 1"; "1 4"; "1 2"; "2 4"; "2 5"; "5 1"; "5 5"]

Using the functions from the previous weeks, we can convert this list to a graph, in which node payloads are the same as node identifiers (i.e., natural numbers), using the following function:

let adjacency_int_graph_of_strings ls =
  let size = trimmer (List.hd ls) |> int_of_string in
  let g = mk_graph size in
  let edges = List.tl ls in
  let pairs =
    List.map (fun s -> trimmer s) edges |>
    List.filter (fun s -> String.length s > 0) |>
    List.map (fun s ->
        let splitted = splitter s in
        let src = int_of_string @@ List.hd splitted in
        let dst = int_of_string @@ List.hd @@ List.tl splitted in
        (src, dst))
  in
  for i = 0 to g.size - 1 do
    set_payload g i i
  done;
  List.iter (fun (s, d) -> add_edge g s d) pairs;
  g

In the same way, we can read a graph from a file (hence the string-based representation):

let read_simple_graph_shape_from_file filename =
  let ls = read_file_to_strings filename in
  adjacency_int_graph_of_strings ls

Finally, we can dump a simple graph with no payloads into a file using the following pair of functions:

let graph_to_string g =
  let s0 = string_of_int g.size in
  let ls = List.map (fun (s, d) ->
      Printf.sprintf "%d %d" s d) (edges g) in
  String.concat "\n" (s0 :: ls)

(* Dump graph to file *)
let write_simple_graph_shape_to_file filename g =
  graph_to_string g |>
  write_string_to_file filename

Question: How would you suggest serialising graphs with non-trivial payloads on nodes and labels on edges?

Rendering Graphs via GraphViz

The simplest way to visualise graphs in a nice form is to use the third-party tool GraphViz. As input, GraphViz accepts a text file in a special format, which it can then convert to an image of a graph, taking care of positioning the nodes and rendering the edges between them. Some examples on using GraphViz can be found by this link. The following functions transform a graph, represented by adjacency lists, into a GraphViz-formatted string and write it to a file:


let graphviz_string_of_graph gtyp conn vattrib eattrib g =
  let preamble = gtyp ^ " {\n" in
  let epilogue = "}" in
  let body =
    AdjacencyGraphs.edges g |>
    List.map (fun (s, d) ->
        Printf.sprintf "%s %s %s %s"
          (vattrib s) conn (vattrib d) (eattrib (s, d))) |>
    String.concat ";\n"
  in
  preamble ^ body ^ epilogue

let graphviz_no_payload g out =
  let s = graphviz_string_of_graph "digraph" " -> "
      string_of_int (fun _ -> "") g in
  write_string_to_file out s

The function graphviz_string_of_graph takes several arguments:

• gtyp is the type of the graph to be rendered (directed/undirected);
• conn is a connective determining the shape of the edges;
• vattrib is a function to render nodes;
• eattrib is a function to render edges;
• g is the graph itself, in the adjacency-list representation.

When run, graphviz_no_payload g "graph.dot" produces a file named graph.dot, which can then be rendered from the console via the GraphViz-provided utility dot as follows:

dot -Tpdf filename.dot -o outfile.pdf

Here filename.dot can be any GraphViz-formatted file (it can also be named differently), and outfile.pdf is the resulting PDF file with the graph.

The image above was obtained via GraphViz for the graph read from small_graph_shape.
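For reference, the GraphViz input generated for that graph looks roughly as follows (the exact whitespace and the order of edges may differ, depending on how the adjacency lists are stored):

digraph {
0 -> 1 ;
0 -> 3 ;
1 -> 4 ;
1 -> 2 ;
2 -> 4 ;
2 -> 5 ;
3 -> 1 ;
5 -> 1 ;
5 -> 5 }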

Shortcomings of the Adjacency-List graph representation

The main disadvantage of the adjacency-list based representation is that many common operations in it, such as getting the predecessors of a node, are very expensive and can take \(O(|E|)\) time. It is also very difficult to add new nodes, as this would require allocating a new array.


Graphs as Linked Data Structures

• File: Graphs.ml

Let us consider a more efficient implementation of graphs, as a linked heap-based data structure. The implementation features some redundancy, in order to provide efficient access to the nodes of a graph as well as to their adjacent neighbours. The implementation relies on data structures developed previously: hash-tables and sets, represented via BSTs. We start by defining the data type for nodes:

module LinkedGraphs = struct

  (*************************************************)
  (*                    Nodes                      *)
  (*************************************************)

  type 'a node = {
    id : int;
    value : 'a ref;
    next : int list ref;
    prev : int list ref
  }

  let get_value n = !(n.value)
  let get_next n = !(n.next)
  let get_prev n = !(n.prev)

  let add_prev node src =
    let prev' = get_prev node |>
                List.filter (fun n -> n <> src) in
    node.prev := src :: prev'

  let add_next node dst =
    let next' = get_next node |>
                List.filter (fun n -> n <> dst) in
    node.next := dst :: next'

  (* More types and functions are coming here *)
end

Each node stores its identifier (an integer), a payload value, as well as lists of "previous" and "next" nodes in the graph (initially empty). We now define a graph as follows:


(*************************************************)
(*            Auxiliary definitions              *)
(*************************************************)

open BST
open BetterHashTable

module Set = BinarySearchTree
module NodeTable =
  ResizableListBasedHashTable(struct type t = int end)
module EdgeTable =
  ResizableListBasedHashTable(struct type t = int * int end)

type 'a set = 'a Set.tree

(*************************************************)
(*             Working with Graphs               *)
(*************************************************)

type ('a, 'b) graph = {
  next_node_id : int ref;
  nodes : int set;
  node_map : (int * 'a node) NodeTable.hash_table;
  edges : (int * int) set;
  edge_labels : ((int * int) * 'b) EdgeTable.hash_table
}

That is, a graph contains:

• a counter next_node_id, used to allocate identifiers for newly added nodes;
• a set (represented via a BST) nodes of all node identifiers;
• a node_map, mapping node identifiers to node objects;
• a set of edges (edges);
• a hash map of edge labels (edge_labels).

The graph structure defined just above allows one to access the set of predecessors/successors of a node in constant time, as opposed to the linear time of the list-based representation. Consider the following utility functions:

(* Graph size *)
let v_size g = !(g.next_node_id)
let e_size g = BinarySearchTree.get_size g.edges
let get_nodes g = Set.elements g.nodes


(* Refer to the node in the graph *)
let get_node g i = get_exn @@ NodeTable.get g.node_map i

let get_succ g n =
  let node = get_node g n in
  get_next node

let get_prev g n =
  let node = get_node g n in
  get_prev node

let node_in_graph g n =
  let nodes = g.nodes in
  Set.search nodes n <> None

let edge_in_graph g src dst =
  let nodes = g.edges in
  Set.search nodes (src, dst) <> None

As the linked graph structure combines five conceptually "overlapping" components, it needs to be maintained with a lot of care, in order not to introduce any discrepancies between the representations. Creating a new empty graph is easy:

let mk_graph _ = {
  next_node_id = ref 0;
  nodes = Set.mk_tree ();
  node_map = NodeTable.mk_new_table 10;
  edges = Set.mk_tree ();
  edge_labels = EdgeTable.mk_new_table 10
}

Adding a node requires allocating a new identifier for it and registering it in both the set of node identifiers and the node map:

let add_node g v =
  let new_id = !(g.next_node_id) in
  g.next_node_id := !(g.next_node_id) + 1;
  let node = {
    id = new_id;
    value = ref v;
    next = ref [];
    prev = ref [];
  } in
  (* Register node *)
  let _ = Set.insert g.nodes new_id in
  (* Register node payload *)
  NodeTable.insert g.node_map new_id node

Adding an edge requires modifying the corresponding node instances, to account for the new predecessors and successors:

let add_edge g src dst =
  assert (node_in_graph g src && node_in_graph g dst);
  (* Register edge *)
  let _ = Set.insert g.edges (src, dst) in
  (* Add information to individual nodes *)
  let src_node = get_exn @@ NodeTable.get g.node_map src in
  let dst_node = get_exn @@ NodeTable.get g.node_map dst in
  add_prev dst_node src;
  add_next src_node dst

We can also set a new label for an edge (src, dst) as follows:

let set_edge_label g src dst l =
  assert (node_in_graph g src && node_in_graph g dst);
  assert (edge_in_graph g src dst);
  (* Register label *)
  EdgeTable.insert g.edge_labels (src, dst) l

Switching between graph representations

As we already have reading/writing implemented for AL-based graphs, let us implement the conversion between them and the linked representations. The following function, for instance, converts a simple AL-based graph (with arbitrary node payloads) to a linked representation:

let from_simple_adjacency_graph (ag : ('a, 'b) AdjacencyGraphs.graph) =
  let g = mk_graph () in
  (* Add nodes *)
  for i = 0 to ag.size - 1 do
    let v = snd @@ List.find (fun (n, _) -> n = i) !(ag.node_payloads) in
    add_node g v
  done;
  (* Add edges *)
  for i = 0 to ag.size - 1 do
    ag.adj.(i) |>
    List.map (fun n -> (i, n)) |>
    List.iter (fun (src, dst) -> add_edge g src dst)
  done;
  (* Add edge labels *)
  List.iter (fun ((src, dst), l) -> set_edge_label g src dst l)
    !(ag.edge_labels);
  g

Conversely, the following function obtains an adjacency graph from a linked representation:

let to_adjacency_graph g =
  let size = v_size g in
  let ag = AdjacencyGraphs.mk_graph size in
  (* Set node payloads *)
  Set.elements g.nodes |>
  List.iter (fun n ->
      let node = get_exn @@ NodeTable.get g.node_map n in
      AdjacencyGraphs.set_payload ag n (get_value node));
  (* Add edges *)
  let edges = Set.elements g.edges in
  List.iter (fun (src, dst) ->
      AdjacencyGraphs.add_edge ag src dst) edges;
  (* Add edge labels *)
  List.iter (fun (s, d) ->
      match EdgeTable.get g.edge_labels (s, d) with
      | None -> ()
      | Some l -> AdjacencyGraphs.set_edge_label ag s d l)
    edges;
  ag

We can now put those functions to use for getting linked graphs immediately from strings and files:

let parse_linked_int_graph ls =
  AdjacencyGraphs.adjacency_int_graph_of_strings ls |>
  from_simple_adjacency_graph

let read_simple_linked_graph_from_file filename =
  let ag = AdjacencyGraphs.read_simple_graph_shape_from_file filename in
  from_simple_adjacency_graph ag


Testing graph operations

One advantage of the AL-based representation is that it makes it considerably easier to test graphs for certain properties. For instance, the following function checks that two AL-represented graphs have the same topology, assuming the exact correspondence of the nodes (i.e., the same sets of node identifiers, and edges between them):

let same_shape (ag1 : ('a, 'b) AdjacencyGraphs.graph)
    (ag2 : ('a, 'b) AdjacencyGraphs.graph) =
  assert (ag1.size = ag2.size);
  let n = ag1.size in
  let comp x y = if x < y then -1 else if x > y then 1 else 0 in
  for i = 0 to n - 1 do
    let adj1 = ag1.adj.(i) |> List.sort comp in
    let adj2 = ag2.adj.(i) |> List.sort comp in
    assert (adj1 = adj2)
  done;
  true

(Notice that both adjacency lists are sorted before the comparison, so the order of edges in them does not matter.) We can use it to check that our AL-to-linked-and-back conversion preserves the graph shape. Take the following graph:

let medium_graph_shape =
  ["13";
   "0 1"; "0 6"; "0 5"; "2 0"; "2 3"; "3 5"; "5 4"; "6 4";
   "7 6"; "8 7"; "6 9"; "9 10"; "9 11"; "9 12"; "11 12"]

We can now make sure that the following test succeeds:

let%test _ =
  let ag = AdjacencyGraphs.adjacency_int_graph_of_strings
      medium_graph_shape in
  let g = LinkedGraphs.from_simple_adjacency_graph ag in
  let ag' = LinkedGraphs.to_adjacency_graph g in
  same_shape ag ag'

We can also try out the conversion machinery for the sake of producing nice GraphViz images:

utop # let g = LinkedGraphs.parse_linked_int_graph medium_graph_shape;;
utop # let ag = LinkedGraphs.to_adjacency_graph g;;
utop # graphviz_no_payload ag "medium.dot";;

Now, by running the following from the terminal:

dot -Tpdf medium.dot -o medium.pdf

we obtain the following image:


Reachability and Graph Traversals

• File: Reachability.ml


Having the graphs defined, let us now do something interesting with them. In this chapter, we will be looking at questions of reachability between nodes, as allowed by a given graph's topology. In all algorithms, we will be relying on the linked representation:

open Util
open ReadingFiles
include Graphs
open LinkedGraphs

Checking Reachability in a Graph

Given a graph g and two of its nodes, init and final, let us define a procedure that determines whether we can get from init to final by following the edges of g, and if so, returns the list of those edges:

let reachable g init final =
  let rec walk path visited n =
    if n = final
    then Some path
    else if List.mem n visited
    then None
    else
      (* Try successors *)
      let node = get_node g n in
      let successors = get_next node in
      let visited' = n :: visited in
      let rec iter = function
        | [] -> None
        | h :: t ->
          let path' = (n, h) :: path in
          match walk path' visited' h with
          | Some p -> Some p
          | None -> iter t
      in
      iter successors
  in
  match walk [] [] init with
  | Some p -> Some (List.rev p)
  | _ -> None

The implementation of reachable employs the backtracking technique (see the Chapter Constraint Solving via Backtracking), which is implemented by means of an interplay of the two functions: walk and iter. The former also checks that we do not hit a cycle in the graph, hence it carries the list of visited nodes. Finally, path accumulates the edges (in reverse order) on the way to the destination, and is returned at the end, if a path is found.


Question: What is the complexity of reachable in terms of the sizes of g.V and g.E? What would it be if we did not take the complexity of List.mem n visited into account?

We can define the reachability predicate as follows:

let is_reachable g init final =
  reachable g init final <> None

Testing Reachability

The following are tests for the two specific graphs we have seen, designed with human intuition in mind:

open Reachability

let%test _ =
  let g = LinkedGraphs.parse_linked_int_graph small_graph_shape in
  (* True statements *)
  assert (is_reachable g 0 5);
  assert (is_reachable g 5 1);
  assert (is_reachable g 5 5);
  (* False statements *)
  assert (not (is_reachable g 4 5));
  true

let%test _ =
  let g = LinkedGraphs.parse_linked_int_graph medium_graph_shape in
  (* True statements *)
  assert (is_reachable g 2 4);
  assert (is_reachable g 8 12);
  assert (is_reachable g 0 10);
  (* False statements *)
  assert (not (is_reachable g 5 9));
  assert (not (is_reachable g 11 7));
  true

Rendering Paths in a Graph

We can use the same machinery for interaction with GraphViz to highlight the reachable paths in a graph:

let bold_edge = "[color=red,penwidth=3.0]"

let graphviz_with_path g init final out =
  let r = reachable g init final in
  let attrib (s, d) = match r with
    | None -> ""
    | Some p ->
      if List.mem (s, d) p
      then bold_edge
      else ""
  in
  let ag = LinkedGraphs.to_adjacency_graph g in
  let s = graphviz_string_of_graph "digraph" " -> "
      string_of_int attrib ag in
  write_string_to_file out s

For instance, taking g to be the medium-size graph from the end of the previous chapter, we can render the result of graphviz_with_path g 2 12 "filename.out" to the following picture:


Depth-First Traversal

It is possible to split a graph into a set of trees with dedicated roots, so that each subtree is reachable from its root. One way to do it is by using the Depth-First Search (DFS) procedure. The procedure is similar to the reachability check implemented above, but employs a more efficient way to detect cycles, via the "colouring" technique. In essence, it maintains an additional


hash table, assigning colours as attributes to the nodes, to indicate whether they have not yet been, are being, or have been fully processed:

open NodeTable

type color = White | Gray | Black

The main procedure is again implemented via backtracking:

let rec dfs g =
  let color_map = mk_new_table (v_size g) in
  let tree_map = mk_new_table (v_size g) in
  let time_map = mk_new_table (v_size g) in
  let has_cycles = ref false in
  let roots = ref [] in
  let all_nodes = get_nodes g in

  (* Make all nodes white *)
  List.iter (fun n -> insert color_map n White) all_nodes;
  (* Insert all nodes to the tree *)
  List.iter (fun n -> insert tree_map n []) all_nodes;

  let time = ref 0 in

  let rec dfs_visit u =
    time := !time + 1;
    let u_in = !time in
    insert color_map u Gray;
    get_succ g u |> List.iter (fun v ->
        let v_color = get_exn @@ get color_map v in
        if v_color = White
        then begin
          let siblings = get_exn @@ get tree_map u in
          insert tree_map u (v :: siblings);
          dfs_visit v
        end
        else if v_color = Gray
        then has_cycles := true);
    insert color_map u Black;
    time := !time + 1;
    let u_out = !time in
    insert time_map u (u_in, u_out)
  in

  List.iter (fun n ->
      if get_exn @@ get color_map n = White
      then begin
        (* Record roots *)
        roots := n :: !roots;
        dfs_visit n
      end) all_nodes;

  (!roots, tree_map, time_map, !has_cycles)

It starts by assigning all nodes the White colour, and then creates an empty tree for each node. It also keeps track of the time (a natural number) of "entering" and "exiting" a node. The "roots" of the trees are all collected in the mutable list roots, and the variable has_cycles indicates whether a cycle has been witnessed. As a result, the procedure returns the list of roots, the hash map that stores the tree relation between the nodes in the DFS traversal from the roots, the pairs of timestamps at which each node has been visited, and the boolean value indicating whether the graph has cycles.

Question: How would you characterise the period during which a node is painted Gray during the DFS traversal?

Question: If u is a parent of v in a DFS-tree, what is the relation between their timestamps?

We can render the result of DFS via the following procedure, using the tree to retrieve the edge attributes:

(* Visualise with DFS *)
let graphviz_with_dfs g out = 
  let (_, tree, _, _) = dfs g in
  let eattrib (s, d) = match get tree s with
    | None -> ""
    | Some p -> 
      if List.mem d p 
      then bold_edge 
      else ""
  in
  let ag = LinkedGraphs.to_adjacency_graph g in
  let s = graphviz_string_of_graph "digraph" " -> " 
      string_of_int eattrib ag in
  write_string_to_file out s

For instance, for our working graph we get the following image, indicating four trees, rooted at nodes 0, 2, 7, and 8, correspondingly (the last two trees only have one node each, hence they are difficult to spot):

We ended up with four trees due to the order in which DFS chose the nodes to start from.

DFS and Reachability
Let us define the following procedure, which checks reachability via DFS:


let is_reachable_via_dfs g init final = 
  let (roots, tree, _, _) = dfs g in
  let rec walk n = 
    if n = final
    then true
    else 
      get tree n |> get_exn |>
      List.exists (fun v -> walk v)
  in
  if List.mem init roots
  then walk init
  else false

Question: Is the initial notion of reachability equivalent to DFS-reachability?

The differences aside, we can still use it to test DFS, relying on the following observations:

let test_dfs g = 
  let all_nodes = LinkedGraphs.get_nodes g in
  let (dfs_roots, _, _, _) = GraphDFS.dfs g in

  (* Any node DFS-reachable from a root r is reachable from r *)
  let fact1 = 
    List.for_all (fun u ->
        List.for_all (fun v ->
            if GraphDFS.is_reachable_via_dfs g u v
            then is_reachable g u v
            else true)
          all_nodes)
      dfs_roots
  in

  (* Any node is reachable from some root r *)
  let fact2 = 
    List.for_all (fun u ->
        List.exists (fun r -> GraphDFS.is_reachable_via_dfs g r u)
          dfs_roots)
      all_nodes
  in

  fact1 && fact2

DFS and Cycle Detection
As a byproduct, our DFS has determined whether the given graph has a cycle in it. We can now test it as follows:

let%test _ = 
  let g = LinkedGraphs.parse_linked_int_graph small_graph_shape in
  let (_, _, _, c) = GraphDFS.dfs g in
  c

let%test _ = 
  let g = LinkedGraphs.parse_linked_int_graph medium_graph_shape in
  let (_, _, _, c) = GraphDFS.dfs g in
  not c

Topological Sort
Assume our graph has no cycles, i.e., it is a so-called Directed Acyclic Graph, or DAG. In this case it is possible to enumerate its nodes (i.e., put them into an ordered list) in a way that all edges go from nodes "left-to-right". This operation is called Topological Sort and is very useful for processing dependencies in an order implicitly imposed by a graph.

As an example of Topological Sort, you can think of compiling multiple OCaml files. Dependencies between files form a DAG (as there are no cycles), and the compiler needs to process them in an order such that dependent files are compiled after their dependencies. This is where Topological Sort comes to the rescue. Another (somewhat more lively) example is a professor who dresses every morning, having the following dependencies between the items of clothes to put on:


The graph with those dependencies can be encoded as follows:

let clothes_edges = [
  (0, 8);
  (0, 2);
  (8, 2);
  (8, 1);
  (8, 7);
  (3, 7);
  (3, 4);
  (4, 5);
  (7, 5);
  (6, 2);
]

while the payloads (i.e., the items of clothes) are given by the following array:

let clothes = [|
  "underpants";
  "phone";
  "shoes";
  "shirt";
  "tie";
  "jacket";
  "socks";
  "belt";
  "trousers";
|]

We can now instantiate the linked-structure-based graph via the following function:

let read_graph_and_payloads size nvalue elist elabels = 
  let open AdjacencyGraphs in
  let g = mk_graph size in
  for i = 0 to g.size - 1 do
    set_payload g i nvalue.(i)
  done;
  List.iter (fun (s, d) -> add_edge g s d) elist;
  List.iter (fun (s, d, l) -> set_edge_label g s d l) elabels;
  LinkedGraphs.from_simple_adjacency_graph g

let clothes_graph = 
  read_graph_and_payloads 9 clothes clothes_edges 
    ([] : (int * int * unit) list)

The image can be produced by the following procedure:

let graphviz_with_payload g values out = 
  let eattrib e = "" in
  let vattrib n = values.(n) in
  let ag = LinkedGraphs.to_adjacency_graph g in
  let s = graphviz_string_of_graph "digraph" " -> " 
      vattrib eattrib ag in
  write_string_to_file out s

The procedure of the topological sort exploits the timestamps recorded during DFS. The intuition is as follows: in the absence of cycles, the nodes with the later "exit" timestamp u_out are the "topological predecessors" of those with smaller timestamps, and, hence, the former should be put earlier in the list. Another way to think of it is that DFS introduces a "parenthesised structure" on the subtrees of the graph, and the nodes up the tree have exit timestamps corresponding to a parenthesis more "to the right". The implementation of the topological sort, thus, simply sorts the nodes in the decreasing order of the exit timestamp:


module TopologicalSort = struct

  open NodeTable

  let get_last_time m n = get_exn @@ get m n

  let topo_sort g = 
    let (_, _, time_map, _) = GraphDFS.dfs g in
    get_nodes g |>
    List.sort (fun n1 n2 ->
        let (_, t1) = get_last_time time_map n1 in
        let (_, t2) = get_last_time time_map n2 in
        if t1 < t2 then 1
        else if t1 > t2 then -1
        else 0)
end

For the graph of the professor's clothes, the topological sort returns the following sequence (which is coherent with the picture above):

utop # let l = TopologicalSort.topo_sort clothes_graph;;
utop # List.iter (fun i -> Printf.printf "%s\n" clothes.(i)) l;;
socks
shirt
tie
underpants
trousers
belt
jacket
phone
shoes

Testing Topological Sort
A simple property to check of a topological sort is that, for any two subsequently positioned nodes (u, v) in its result, the node u is not reachable from v:

let rec all_pairs ls = match ls with
  | [] -> []
  | _ :: [] -> []
  | h1 :: h2 :: t -> (h1, h2) :: (all_pairs (h2 :: t))

let%test _ = 
  let g = LinkedGraphs.parse_linked_int_graph medium_graph_shape in
  let pairs = TopologicalSort.topo_sort g |> all_pairs in
  List.for_all (fun (s, d) -> not (is_reachable g d s)) pairs

let%test _ = 
  let g = clothes_graph in
  let pairs = TopologicalSort.topo_sort g |> all_pairs in
  List.for_all (fun (s, d) -> not (is_reachable g d s)) pairs
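The check above only relates adjacent elements of the resulting list. A slightly stronger (quadratic) variant, sketched below, asserts the same property for every ordered pair in the output; the helper ordered_pairs is made up for this example and is not part of the chapter's code:

let rec ordered_pairs ls = match ls with
  | [] -> []
  | h :: t -> List.map (fun x -> (h, x)) t @ ordered_pairs t

let%test _ = 
  let g = clothes_graph in
  let pairs = TopologicalSort.topo_sort g |> ordered_pairs in
  List.for_all (fun (s, d) -> not (is_reachable g d s)) pairs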

Single-Source Shortest Paths
• File: Paths.ml

One of the most common problems solved via graphs is navigation: finding the most efficient route from a point A to a point B, following the map defined by the graph (this is what happens when you are looking for directions on Google Maps).

Weighted Graphs
To represent the navigation problem using graphs, we need to expand our definitions.

Definition (Weighted directed graph)
A directed graph \(G = (V, E)\) is called weighted if it comes with a function \(w : E \rightarrow \mathbb{R}\), which maps each edge to some number, representing its "weight" (or "cost").

Definition (A path in a graph)
A sequence of nodes \(p = \langle v_0, \ldots, v_k \rangle\) is a path in a graph \(G\), if for any two consecutive nodes \(v_i\) and \(v_{i+1}\) in \(p\), \((v_i, v_{i + 1}) \in G.E\). The weight of the path \(p\) is defined as \(w(p) = \sum_{i=1}^{k}w(v_{i - 1}, v_i)\).
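As a small worked example (with made-up weights): for a path \(p = \langle v_0, v_1, v_2 \rangle\) with \(w(v_0, v_1) = 6\) and \(w(v_1, v_2) = -3\), the definition gives \(w(p) = 6 + (-3) = 3\).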

Definition (The shortest path)
A path \(p\) from a node \(u\) to a node \(v\) in a graph \(G\) is the shortest one, if its weight is the smallest of all possible paths from \(u\) to \(v\) in \(G\). If no path from \(u\) to \(v\) exists, the shortest path's weight is taken to be \(\infty\).


In our example with navigation via Google Maps, the weight of a path corresponds to the time of a certain connection; choosing the shortest path corresponds to choosing the fastest route.

Some Properties of Paths
The following properties hold of shortest paths in directed weighted graphs, and are easy to prove:

• A subpath of a shortest path is a shortest path between the corresponding nodes.
• A shortest path always contains each node at most once.

The definition of a shortest path also applies in the case of negative weights. However, it makes no sense in the presence of cycles with negative weight (explain why).

Representing Shortest Paths
Given a source node \(s\) in a graph \(G\), we are interested in computing the shortest paths to all nodes of \(G\) that are reachable from \(s\). This problem is known as SSSP: Single-Source Shortest Paths.

While discovering shortest paths, we will be representing the current knowledge about paths from \(s\) to other nodes by building a predecessor tree pred_tree. It can be represented via a hash table, in which each node \(v\) of the graph (serving as a key) points to the current predecessor node on a path from \(s\) to \(v\), or None, if no path is built yet. As we keep building the shortest paths, this information can change. The actual path from \(s\) to any node \(v\) can be reconstructed by traversing the branches of the predecessor tree bottom-up and then reversing the obtained list.

It is also convenient to store the distance from the search root \(s\) to all nodes \(v\) in a separate structure, which we will call the distance table (dist_table). Initially dist_table stores 0 for \(s\) and \(\infty\) for all other nodes. This information will evolve with the progression of the algorithms.

In our implementation of graphs, we can encode the weights of edges by piggy-backing on the labels of the graph edges. Therefore, we will need the following auxiliary definitions:

open ReadingFiles
open BST
open BinarySearchTree
include Reachability

(* Get node payload for AL graph *)
let get_ag_node_payload ag n = 
  let open AdjacencyGraphs in
  List.find (fun (x, _) -> x = n) !(ag.node_payloads) |> snd
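Before moving on, here is a tiny self-contained illustration of the predecessor-tree idea described above, with the tree modelled by an association list rather than a hash table (the function get_shortest_path, defined later in this chapter, implements the same walk over the actual structures):

(* Walk the predecessor map bottom-up, accumulating edges;
   prepending to the accumulator performs the reversal. *)
let rec path_from_pred pred v acc = 
  match List.assoc_opt v pred with
  | None -> acc
  | Some u -> path_from_pred pred u ((u, v) :: acc)

(* With s = 0 and predecessors 0 <- 1 <- 2: *)
let () = assert (path_from_pred [(1, 0); (2, 1)] 2 [] = [(0, 1); (1, 2)])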


(* Get edge label for AL graph *)
let get_ag_edge_label ag s d = 
  let open AdjacencyGraphs in
  List.find (fun ((x, y), _) -> s = x && y = d) !(ag.edge_labels) |> snd

(* Get node payload for linked graph *)
let get_linked_node_payload g n = 
  let open LinkedGraphs in
  let node = NodeTable.get g.node_map n |> get_exn in
  !(node.value)

(* Get edge label for linked graph *)
let get_linked_edge_label g s d = 
  let open LinkedGraphs in
  EdgeTable.get g.edge_labels (s, d) |> get_exn

The following modified function helps to visualise graphs with weights:

let graphviz_with_weights g out = 
  let open AdjacencyGraphs in
  let ag = LinkedGraphs.to_adjacency_graph g in
  let vattrib = get_ag_node_payload ag in
  let eattrib (s, d) = 
    let l = get_ag_edge_label ag s d |> string_of_int in
    Printf.sprintf "[label=\"%s\", weight=\"%s\"]" l l
  in
  let s = graphviz_string_of_graph "digraph" " -> " 
      vattrib eattrib ag in
  write_string_to_file out s

For instance, consider the following example graph with named nodes and integer weights on its edges:

let bf_example_nodes = [|"s"; "t"; "y"; "x"; "z"|]

let bf_example_edges = 
  [(0, 1); (0, 2); (1, 2); (1, 3); (1, 4); 
   (2, 3); (2, 4); (3, 1); (4, 0); (4, 3)]

let bf_example_labels = 
  [(0, 1, 6); (0, 2, 7); (1, 2, 8); (1, 3, 5); (1, 4, -4); 
   (2, 3, -3); (2, 4, 4); (3, 1, -2); (4, 0, 2); (4, 3, 7)]

let example_graph_bf = 
  read_graph_and_payloads 5 bf_example_nodes
    bf_example_edges bf_example_labels

Upon rendering it via graphviz_with_weights , we obtain the following plot:

Representing Distance
When we start looking for the paths, we do not yet know the distances from \(s\) to other nodes, hence we need to over-approximate. For this we are going to be using the following "wrapper" type Distance.dist, which allows for representing infinite distances:

module Distance = struct

  type dist = 
    | Finite of int
    | Infinity

  let (<) d1 d2 = match (d1, d2) with
    | Infinity, _ -> false
    | Finite _, Infinity -> true
    | Finite x, Finite y -> x < y

  let (<=) d1 d2 = d1 < d2 || d1 = d2
  let (>) d1 d2 = not (d1 <= d2)
  let (>=) d1 d2 = not (d1 < d2)

  let (+) d1 d2 = match (d1, d2) with
    | Infinity, _ -> Infinity
    | _, Infinity -> Infinity
    | Finite x, Finite y -> Finite (x + y)

  let int_of_dist d = match d with
    | Infinity -> raise (Failure "Cannot convert infinity to integer!")
    | Finite n -> n

end

Notice that we specifically arrange these operations in a separate module, in order to avoid clashes between the overloaded comparison operators and those defined automatically by OCaml (had we relied on the latter, our further implementation would be incorrect!).
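As a quick sanity check of the overloaded operators (with the module as defined above):

let () = 
  let open Distance in
  assert (Finite 3 < Infinity);
  assert (Finite 3 + Finite 4 = Finite 7);
  assert (Infinity + Finite 1 = Infinity)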

Initialisation and Relaxation
All SSSP algorithms rely on two main operations:

• Initialising the predecessor tree and the distance table, and
• Relaxing the path information about two nodes, by accounting for a newly found smaller distance between them.

The first operation is implemented as follows. It takes a graph g (in a linked form) and a source node s, and returns the weight function w, the predecessor tree and the distance table:

let initialise_single_source g s = 
  let open Distance in
  let n = v_size g in
  let dist_table = mk_new_table n in
  let prev_tree = mk_new_table n in
  for i = 0 to n - 1 do
    insert dist_table i Infinity;
  done;
  insert dist_table s (Finite 0);
  let w = get_linked_edge_label g in
  (w, dist_table, prev_tree)

The second operation relies on the auxiliary function dist:

(* Get distance from the table *)
let dist dist_table u = 
  let open NodeTable in
  get_exn @@ get dist_table u

The function relax dist_table prev_tree w u v acts on the assumption that dist_table and prev_tree record some partial information about the over-approximated shortest paths from s to both u and v. It then checks if this information can be improved by taking the weight of the edge (u, v) into account. If this is the case, both the distance and the predecessor information are updated:

(* Relax the distance between u and v *)
let relax dist_table prev_tree w u v = 
  let open Distance in
  let vud = dist dist_table u + (Finite (w u v)) in
  if dist dist_table v > vud
  then begin
    insert dist_table v vud;
    insert prev_tree v u
  end

The relaxation procedure satisfies the following property, which is crucial for the correctness of many SSSP algorithms:

Property (Path relaxation)
If \(p = \langle v_0, v_1, \ldots, v_k \rangle\) is a shortest path from \(s = v_0\) to \(v_k\), and we relax the edges of \(p\) in the order \((v_0, v_1)\), \((v_1, v_2)\), etc., then the distance to \(v_k\), as recorded in the distance table, is the weight of the path \(p\). In other words, any path of length \(k\) or less can be discovered in \(k\) relaxations of the entire set \(E\) of edges.
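As a small worked illustration (with made-up numbers): suppose the table currently records Finite 3 for u and Finite 10 for v, and w u v = 5. Then vud = Finite 3 + Finite 5 = Finite 8, and since Finite 10 > Finite 8, relax updates the distance of v to Finite 8 and records u as v's predecessor. A repeated relaxation of the same edge would compare Finite 8 > Finite 8 and change nothing.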

Bellman-Ford Algorithm
The Bellman-Ford algorithm builds on the path relaxation property. It is powered by the observation that, if n is the number of nodes in the graph, any shortest path has n or fewer nodes in it (otherwise there are repetitions, which contradicts the fact that it is a shortest path). Therefore, having done n relaxations of the entire set of edges, we will have discovered the shortest paths, building the corresponding predecessor tree. This is done as follows:


let bellman_ford g s = 
  let open Distance in
  let (w, d, p) = initialise_single_source g s in
  let all_edges = elements g.edges in

  for i = 0 to v_size g - 1 do
    List.iter (fun (u, v) -> relax d p w u v) all_edges
  done;

  (* Check for negative cycles *)
  let rec check_neg_cycles es = match es with
    | [] -> true
    | (u, v) :: t ->
      if dist d v > dist d u + (Finite (w u v))
      then false
      else check_neg_cycles t
  in
  ((p, d), check_neg_cycles all_edges)

The algorithm also works on graphs with negative-weighted edges. As a bonus, it discovers whether the graph has negative cycles, in which case there is no shortest path (or, rather, its weight is \(-\infty\)). This is done by the call to check_neg_cycles, which checks whether further relaxations can reduce some distances even further (which would be impossible if there were no negative cycles). Notice that bellman_ford relies on the dist data type from the Distance module to operate with possibly infinite weights.

Question: What is the complexity of bellman_ford in terms of g.V and g.E?
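The shortest distances in example_graph_bf can be computed by hand: from node 0 ("s") they are 2 to "t" (via "x"), 7 to "y", 4 to "x" (via "y"), and -2 to "z" (via "t"). This gives us a small sanity check (a sketch, complementing the more general tests later in this chapter):

let%test "Bellman-Ford on the example graph" = 
  let ((_, d), no_neg_cycles) = bellman_ford example_graph_bf 0 in
  let dist_to n = Distance.int_of_dist (dist d n) in
  no_neg_cycles &&
  dist_to 1 = 2 && dist_to 2 = 7 && dist_to 3 = 4 && dist_to 4 = -2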

Rendering Minimal Paths
We can visualise the result of the algorithm by using the following function, rendering a suitable GraphViz representation:

let graphviz_with_min_paths path_calculuator g s out = 
  let p = path_calculuator g s in
  let attrib (u, v) = 
    let l = get_linked_edge_label g u v |> string_of_int in
    match get p v with
    | Some z when u = z ->
      Printf.sprintf "[label=\"%s\", color=red,penwidth=3.0]" l
    | _ ->
      Printf.sprintf "[label=\"%s\"]" l
  in
  let ag = LinkedGraphs.to_adjacency_graph g in
  let s = graphviz_string_of_graph "digraph" " -> " 
      (get_linked_node_payload g) attrib ag in

  write_string_to_file out s

let graphviz_with_bellman_ford = 
  let pc g s = bellman_ford g s |> fst |> fst in
  graphviz_with_min_paths pc

Running graphviz_with_bellman_ford example_graph_bf 0 "bf.dot" produces the following plot:

Dijkstra's Algorithm
Dijkstra's algorithm has a better complexity than Bellman-Ford, but only works on graphs with non-negative edge weights. It is a greedy algorithm that gradually explores the surroundings of the source node \(s\), looking for the next node that will provide for the shortest paths. In doing so, it explores each node and edge just once, relying on the ongoing relaxation, recomputing the shortest paths as it goes:


(* Extract minimal distance in O(|remaining|) *)
let extract_min_dist dist_table remaining = 
  let open Distance in
  let res = ref None in
  let d = ref Infinity in

  List.iter (fun i ->
      let di = dist dist_table i in
      if di <= !d
      then begin
        res := Some i;
        d := di
      end) !remaining;

  match !res with
  | None -> None
  | Some i -> begin
      remaining := List.filter (fun j -> i <> j) !remaining;
      !res
    end

let dijkstra g s = 
  let (w, d, p) = initialise_single_source g s in
  (* Make queue of remaining uninspected nodes *)
  let q = ref (iota (v_size g - 1)) in
  while !q <> [] do
    let u = extract_min_dist d q |> get_exn in
    let adj = get_succ g u in
    List.iter (fun v -> relax d p w u v) adj
  done;
  (p, d)

The procedure extract_min_dist takes the node with the minimal distance from s (initially, this is just s) and removes it from the remaining list of nodes to be processed. After that, this node is used for the relaxation of the paths to all of its successors. This procedure is repeated until all nodes are processed.

Question: The complexity of our implementation of dijkstra is \(O(|g.V|^2 + |g.E|)\). Can you explain it?

Dijkstra's algorithm crucially relies on all edge weights being non-negative. This way, adding an edge to a path can never make the path shorter (which is not the case with negative edges). This is why taking the shortest candidate edge (local optimality) always ends up being correct (global optimality). If that is not the case, the "frontier" of candidate edges does not send the right signals; a cheap edge might lure you down a path with positive weights while an expensive one hides a path with negative weights.

We can experiment with Dijkstra's algorithm on the following graph:

let graphviz_with_dijkstra = 
  let pc g s = dijkstra g s |> fst in
  graphviz_with_min_paths pc

let dijkstra_example_nodes = 
  [|"s"; "t"; "y"; "x"; "z"|]

let dijkstra_example_edges = 
  [(0, 1); (0, 2); (1, 2); (1, 3); (2, 1); 
   (2, 3); (2, 4); (3, 4); (4, 0); (4, 3)]

let dijkstra_example_labels = 
  [(0, 1, 10); (0, 2, 5); (1, 2, 2); (1, 3, 1); (2, 1, 3); 
   (2, 3, 9); (2, 4, 2); (3, 4, 4); (4, 0, 7); (4, 3, 6)]

let example_graph_dijkstra = 
  read_graph_and_payloads 5 dijkstra_example_nodes
    dijkstra_example_edges dijkstra_example_labels

The results are shown in the following plot:

Testing Shortest-Path Algorithms
The following functions help to retrieve the shortest paths from the predecessor tree and also to compute the weight of a path:

let get_shortest_path p s u = 
  let rec walk acc v = match get p v with
    | None -> acc
    | Some x -> walk ((x, v) :: acc) x
  in
  let res = walk [] u in
  if u = s || res <> [] && (List.hd res |> fst = s)
  then Some res
  else None


let rec get_path_weigth g path = match path with
  | (u, v) :: t ->
    let w = get_linked_edge_label g u v in
    w + get_path_weigth g t
  | _ -> 0

Let us now distil some properties of shortest paths in the form of a test. We will test an SSSP solution on the two given graphs by relying on the reachability facts derived before. Specifically, we will check that:

1. A shortest path is a connected path.
2. The distance table correctly records the shortest path's weight.
3. Each edge of a shortest path is an edge of the graph.
4. A shortest path from s exists for each node reachable from s.
5. A shortest path from s to u is no longer than an arbitrary path from s to u.

This is covered by the following tests:

open LinkedGraphs
open NodeTable

(* Test parameters:
   p - predecessor tree
   d - distance table
   g - the graph
   s - source node
   u - destination node
*)

(* 1. Path is connected *)
let test_path_connected p d g s u = 
  match get_shortest_path p s u with
  | None -> true
  | Some path ->
    let rec walk p = match p with
      | (u, v) :: (x, y) :: t -> v = x && walk ((x, y) :: t)
      | _ -> true
    in
    walk path

(* 2. Path's weight is correctly recorded *)
let test_path_weight p d g s u = 
  match get_shortest_path p s u with
  | None -> true
  | Some path ->
    let w1 = get_path_weigth g path in
    let w2 = get_exn @@ get d u |> Distance.int_of_dist in
    w1 = w2

(* 3. Has all edges *)
let test_that_is_path_graph p d g s u = 
  match get_shortest_path p s u with
  | None -> true
  | Some path ->
    let all_edges = g.edges |> elements in
    List.for_all (fun e -> List.mem e all_edges) path

(* 4. Exists for any reachable node *)
let test_reachable_hence_has_path p d g s u = 
  if is_reachable g s u
  then get_shortest_path p s u <> None
  else true

(* 5. And is the shortest *)
let test_shortest_is_shorter p d g s u = 
  match reachable g s u with
  | None -> true
  | Some p1 ->
    match get_shortest_path p s u with
    | None -> false
    | Some p2 ->
      let w1 = get_path_weigth g p1 in
      let w2 = get_path_weigth g p2 in
      w2 <= w1

(* Main testing function *)
let test_sssp algo g = 
  let all_nodes = get_nodes g in
  List.iter (fun u ->
      List.iter (fun v ->
          let (p, d) = algo g u in
          assert (test_path_connected p d g u v);
          assert (test_path_weight p d g u v);
          assert (test_that_is_path_graph p d g u v);
          assert (test_reachable_hence_has_path p d g u v);
          assert (test_shortest_is_shorter p d g u v))
        all_nodes)
    all_nodes;
  true

(* Testing Bellman-Ford *)

let%test "Bellman-Ford-1" = 
  let algo g s = bellman_ford g s |> fst in
  test_sssp algo example_graph_bf

(* BF also works on Dijkstra-suitable graphs *)
let%test "Bellman-Ford-2" = 
  let algo g s = bellman_ford g s |> fst in
  test_sssp algo example_graph_dijkstra

(* Testing Dijkstra *)

let%test "Dijkstra" = 
  test_sssp dijkstra example_graph_dijkstra

Minimal Spanning Trees
• File: Spanning.ml

So far we have only looked at directed graphs, where each edge, besides the pair of connected nodes, also identifies a direction. It is common, however, to also consider undirected graphs, where each edge establishes a symmetric link between two nodes.

Representing Undirected Graphs
Undirected graphs can be encoded in the same way as directed ones, using the same data structures, with either

• a certain amount of duplication, i.e., for each edge (u, v) ensuring that (v, u) is also present, or
• some elaborate conventions in how the topology is treated: for instance, when considering the successors of a node in a directed graph, one should also consider its predecessors.

If the second way is adopted (this is what we are going to do), then one also needs to ensure that only one edge out of (u, v) and (v, u) is present. The standard practice is to store the edge that is smaller lexicographically. For instance, consider the following weighted undirected graph:

let undirected_example_nodes = 
  [|"a"; "b"; "c"; "d"; "e"; "f"; "g"; "h"; "i"|]

let undirected_example_edges = 
  [(0, 1); (0, 7); (1, 2); (1, 7); (2, 3); (2, 5); (2, 8); 
   (3, 4); (3, 5); (4, 5); (5, 6); (6, 7); (6, 8); (7, 8)]

let undirected_example_labels = 
  [(0, 1, 4); (0, 7, 8); (1, 2, 8); (1, 7, 11); (2, 3, 7); 
   (2, 5, 4); (2, 8, 2); (3, 4, 9); (3, 5, 14); (4, 5, 10); 
   (5, 6, 2); (6, 7, 1); (6, 8, 6); (7, 8, 7)]

let example_graph_undirected = 
  read_graph_and_payloads 9 undirected_example_nodes
    undirected_example_edges undirected_example_labels

We can render it via the following procedure:

open ReadingFiles
include Paths

let graphviz_with_weights_undirected g out = 
  let open AdjacencyGraphs in
  let ag = LinkedGraphs.to_adjacency_graph g in
  let vattrib = get_ag_node_payload ag in
  let eattrib (s, d) = 
    let l = get_ag_edge_label ag s d |> string_of_int in
    Printf.sprintf "[label=\"%s\", weight=\"%s\"]" l l
  in
  let s = graphviz_string_of_graph "graph" " -- " 
      vattrib eattrib ag in
  write_string_to_file out s

obtaining this beautiful plot:

Trees in Undirected Connected Graphs
Fully connected undirected graphs (i.e., graphs in which each node is reachable from any other one) are frequently used to describe network topologies or electronic circuit layouts. It is sometimes convenient to find the minimal connected subgraph \(G'\) that spans the entire set \(G.V\) of nodes of the "main" graph \(G\), while covering only a subset of the edges from \(G.E\). This subgraph turns out to be a tree, and can be characterised by either of the following definitions:

• A minimal (in terms of the number of edges) connected subgraph \(G'\) of \(G\)

• A connected subgraph \(G'\) of \(G\), such that \(G'\) has no cycles
• A connected subgraph \(G'\) of \(G\) such that \(|G'.E| = |G'.V| - 1\)

The last observation is important: a tree is a very sparse graph, in which the number of edges is the number of nodes minus one. However, the other definitions are useful, too.

Minimal Spanning Trees
Given a weighted undirected graph \(G\), we are interested in finding its Minimal Spanning Tree (MST), which is defined as follows:

Definition (Minimal Spanning Tree)
A Minimal Spanning Tree \(T\) of a graph \(G\) is a subset \(T \subseteq G.E\), such that: (1) all nodes \(G.V\) are connected by edges in \(T\), (2) the sum of the weights of the edges in \(T\) is minimal (among other possible analogous subsets), and (3) \(T\) has no cycles, that is, it is a tree.

Minimal spanning trees find many applications in:

• Telecommunication network design
• Electronic circuit layout
• Image segmentation

Question: For any graph, is its MST always unique?

Kruskal's Algorithm
Kruskal's algorithm returns the result tree \(T\) as a list of edges (a corresponding undirected graph can be restored in linear time). The key step of the algorithm is sorting the edges by their weight. The algorithm relies on the Union-Find structure for disjoint sets (cf. Chapter Equivalence Classes and Union-Find).

The algorithm first sorts all edges in an ascending order according to their weights. It then progressively fetches the edges and connects the corresponding disjoint subgraphs. The following progression illustrates the main procedure on a simple graph example:


The listing of the algorithm is given below:

open UnionFind
open LinkedGraphs

let mst_kruskal g = 
  let open UnionFind in
  let forest = mk_UF (v_size g) in
  let tree = ref [] in

  let edges_sorted = 
    Set.elements g.edges |>
    List.sort (fun (a, b) (x, y) ->
        let w1 = get_linked_edge_label g a b in
        let w2 = get_linked_edge_label g x y in
        if w1 < w2 then -1
        else if w1 > w2 then 1
        else 0)
  in

  List.iter (fun (u, v) ->
      let su = find forest u in
      let sv = find forest v in
      if su <> sv
      then begin
        tree := (u, v) :: !tree;
        union forest u v
      end)
    edges_sorted;

  !tree

Question: What is the complexity of the algorithm?

For our example above, the algorithm results in the following output, obtained with the procedure graphviz_with_mst:

let graphviz_with_mst algo g out = 
  let t = algo g in
  let attrib (u, v) = 
    let l = get_linked_edge_label g u v |> string_of_int in
    let b = List.exists 
        (fun (x, y) -> x = u && y = v || x = v && y = u) t
    in
    if b
    then Printf.sprintf "[label=\"%s\", color=red,penwidth=3.0]" l
    else Printf.sprintf "[label=\"%s\"]" l
  in
  let ag = LinkedGraphs.to_adjacency_graph g in
  let s = graphviz_string_of_graph "graph" " -- " 
      (get_linked_node_payload g) attrib ag in
  write_string_to_file out s

let graphviz_with_kruskal = 
  graphviz_with_mst mst_kruskal


Testing MST Construction
The following simple test checks one of the properties of the constructed MST:

let%test "Testing MST size" = 
  let t = mst_kruskal example_graph_undirected in
  List.length t = v_size example_graph_undirected - 1

Other properties are left for you to establish as a home exercise.
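For instance, here is a sketch of one more property check, using the same UnionFind API as mst_kruskal: after taking the union along all the MST edges, every node should end up in a single connected component:

let%test "MST spans all nodes" = 
  let open UnionFind in
  let g = example_graph_undirected in
  let t = mst_kruskal g in
  let forest = mk_UF (v_size g) in
  List.iter (fun (u, v) -> union forest u v) t;
  let r0 = find forest 0 in
  get_nodes g |> List.for_all (fun n -> find forest n = r0)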

Other MST Algorithms
In the interest of time, we only mention other popular MST algorithms:

• Prim's algorithm
• Boruvka's algorithm

Exercises

Exercise 1
Given a weighted directed graph, implement an algorithm to find a monotonic shortest path from a node s to any other node. A path is monotonic if the weights of its edges are either strictly increasing or strictly decreasing. Hint: think about the order in which the edges need to be relaxed. Implement tests for your algorithm and argue about its asymptotic complexity.

Exercise 2
Given a weighted directed graph, implement an algorithm to find a bitonic shortest path from a node s to any other node. A path is bitonic if there is an intermediate node v in it, such that the weights of the edges on the path from s to v are strictly increasing, and the weights of the edges from v to t (the final node of the path) are strictly decreasing.

Week 13: Elements of Computational Geometry

Basics of Computational Geometry
Computational geometry (CG) is a sub-area of computer science that studies data structures and algorithms for solving geometric problems, which are common in computer vision, metallurgy, manufacturing, forestry, and navigation. Those problems, as we know them from high school, often have a "continuous" nature, so the main task of computational geometry is to reduce them to familiar abstractions, in terms of the data structures that we have already studied. In this last chapter of the course, we will take a look at some basic constructions of CG and will learn how to represent them graphically, as well as how to compute some of their intrinsic properties.

Working with graphics in OCaml
• File GraphicUtil.ml

There is not much fun in talking about geometric problems without being able to visualise them. To render the shapes and the results of the algorithms, we will employ OCaml's Graphics package, which provides very primitive (yet sufficient for our needs) support for rendering two-dimensional shapes. The detailed instructions on how to install and run the functions from the Graphics package are given in this demo repository.

It is quite easy to start working with graphics. For instance, running the following commands opens a window that is 800 pixels wide and 600 pixels tall:

open Graphics;;
open_graph " 800x600"

The axes of the window start in the bottom left corner, and go right and up. It is more convenient to have our layout "centered" around the actual center of the image, so we could make use of the negative parts of the axes as well. This is why, when rendering our figures, we will always shift them with respect to a certain origin, which we choose to be the center of our 800x600 pixel window:

let origin = (400, 300)

let go_to_origin _ =
  let x = fst origin in
  let y = snd origin in
  moveto x y;
  set_color black

In the function go_to_origin, the command moveto transfers the "invisible" pointer to the origin, and set_color sets the default drawing colour to be Graphics.black (other colours are possible and can be set up using the RGB scheme). Using the following function, we can draw a "grid" of the two axes, to help with orientation:

let draw_axes _ = 
  let x = fst origin in
  let y = snd origin in
  set_color green;
  moveto 0 y;
  lineto (x * 2) y;
  moveto x 0;
  lineto x (y * 2);
  moveto x y;
  set_color black

Using it, we can create a new window with the axes already drawn:

let mk_screen _ = 
  open_graph " 800x600";
  draw_axes ()

Finally, we can remove everything but the axes by running the following function:

let clear_screen _ = 
  clear_graph ();
  draw_axes ()

Points, Segments and their Properties
• File: Points.ml

On precision and epsilon-equality
Geometrical objects in a cartesian 2-dimensional space are represented by pairs of their coordinates \(x, y \in \mathbb{R}\), which can be encoded in OCaml using the data type float. As the name suggests, this is the type for floating-point numbers, which can encode mathematical numbers only with a finite precision. This is why ordinary equality should not be used on them. For instance, as a result of a numeric computation, we can obtain the two numbers 0.3333333333 and 0.3333333334, both "encoding" \(\frac{1}{3}\), under-approximating it in the former case and over-approximating it in the latter. It is a usual practice to use an \(\varepsilon\)-equality when comparing floating-point numbers. The following operations allow us to achieve this:

let eps = 0.0000001

let (=~=) x y = abs_float (x -. y) < eps
let (<=~) x y = x =~= y || x < y
let (>=~) x y = x =~= y || x > y

let is_zero x = x =~= 0.0
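For instance, in utop (the long decimal expansion of 0.1 +. 0.2 is an artefact of binary floating-point arithmetic):

utop # 0.1 +. 0.2 = 0.3;;
- : bool = false
utop # 0.1 +. 0.2 =~= 0.3;;
- : bool = true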

Points on a two-dimensional plane
A point is simply a pair of two floats, wrapped in a constructor to avoid possible confusion:

type point = Point of float * float

let get_x (Point (x, y)) = x
let get_y (Point (x, y)) = y

We can draw a point as a small circle (let us say, with a radius of 3 pixels) using OCaml's graphics capabilities, via the following functions:

include GraphicUtil

let point_to_orig p = 
  let Point (x, y) = p in
  let ox = fst origin in
  let oy = snd origin in
  (* Question: why adding fst/snd origin? *)
  (int_of_float x + ox, int_of_float y + oy)

let draw_point ?color:(color = Graphics.black) p = 
  let open Graphics in
  let (a, b) = current_point () in
  let ix, iy = point_to_orig p in
  moveto ix iy;
  set_color color;
  fill_circle ix iy 3;
  moveto a b;
  set_color black

Let us take some of the predefined points from this module:

module TestPoints = struct
  let p = Point (100., 150.)
  let q = Point (-50., 75.)
  let r = Point (50., 30.)
  let s = Point (75., 60.)
  let t = Point (75., 90.)
end

Drawing them as follows results in the picture below:

utop # open Points;;
utop # open TestPoints;;
utop # mk_screen ();;
utop # draw_point p;;
utop # draw_point q;;
utop # draw_point r;;
utop # draw_point s;;
utop # draw_point t;;

A very common operation is moving a point in a given direction, by adding certain x- and y-coordinates to it:

let (++) (Point (x, y)) (dx, dy) = 
  Point (x +. dx, y +. dy)

Points as vectors
It is common to think of 2-dimensional points as of vectors: directed segments, connecting the beginning of the coordinates with the point. We reflect it via the function that renders points as vectors:

let draw_vector (Point (x, y)) = 
  let ix = int_of_float x + fst origin in
  let iy = int_of_float y + snd origin in
  go_to_origin ();
  Graphics.lineto ix iy;
  go_to_origin ()

Notice that, in order to position the vector correctly, we keep "shifting" the point coordinates relative to the graphical "origin". We do so by adding fst origin and snd origin to the x/y coordinates of the point, correspondingly.

The length of the vector induced by the point with the coordinates \((x, y)\) can be obtained as \(|(x, y)| = \sqrt{x^2 + y^2}\):

let vec_length (Point (x, y)) = 
  sqrt (x *. x +. y *. y)

Another common operation is to subtract one vector from another, to obtain the vector that connects their ends:

let (--) (Point (x1, y1)) (Point (x2, y2)) = 
  Point (x1 -. x2, y1 -. y2)

Scalar product of vectors
Imagine that we want to "turn" one vector in the direction of another. For this, we need to answer three questions:

1. How can we calculate the value of the angle?
2. How do we perform the rotation?
3. In which direction should we turn?

Question (1) can be answered by computing the scalar product (also known as the dot product) of the two points/vectors. By definition, \((x_1, y_1) \cdot (x_2, y_2) = |(x_1, y_1)|\,|(x_2, y_2)|\cos{\theta} = x_1 \times x_2 + y_1 \times y_2\), where \(\theta\) is the smaller angle between \((x_1, y_1)\) and \((x_2, y_2)\). Therefore, we can calculate the scalar product as follows:

let dot_product (Point (x1, y1)) (Point (x2, y2)) = 
  x1 *. x2 +. y1 *. y2

Assuming neither of the two vectors is zero, we can calculate the angle using the function acos from OCaml's library:

let angle_between v1 v2 = 
  let l1 = vec_length v1 in
  let l2 = vec_length v2 in
  if is_zero l1 || is_zero l2 then 0.0
  else
    let p = dot_product v1 v2 in
    let a = p /. (l1 *. l2) in
    assert (abs_float a <=~ 1.);
    acos a

Question (2) is solved by converting a point to polar coordinates (a radius and an angle), adjusting the angle, and converting the result back; the corresponding functions are also used for rotating polygons later in this chapter:

type polar = Polar of float * float

let polar_of_cartesian (Point (x, y)) = 
  let r = sqrt (x *. x +. y *. y) in
  let phi = atan2 y x in
  Polar (r, phi)

let cartesian_of_polar (Polar (r, phi)) = 
  Point (r *. cos phi, r *. sin phi)

let rotate_by_angle p a = 
  let Polar (r, phi) = polar_of_cartesian p in
  cartesian_of_polar (Polar (r, phi +. a))

Question (3) is answered by the cross product of the two vectors: its sign tells the direction of the turn from one vector to another:

let cross_product (Point (x1, y1)) (Point (x2, y2)) = 
  x1 *. y2 -. x2 *. y1

let sign p = 
  if p =~= 0. then 0
  else if p < 0. then -1
  else 1

Finally, given three points, p0, p1 and p2, one can use the operations of vector subtraction to determine in which direction the chain [p0; p1; p2] turns:

let direction p0 p1 p2 = 
  cross_product (p2 -- p0) (p1 -- p0) |> sign

The direction depends on the result of the function above:

• If it is 1, the chain is turning right (clockwise);
• If it is -1, it is turning left (counter-clockwise);
• 0 means there is no turn.

For example, for the following image, the result of direction q r p is -1:
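As a quick sanity check of angle_between (using the definitions above): the angle between the unit vectors along the two axes should be \(\pi/2\):

utop # angle_between (Point (1., 0.)) (Point (0., 1.));;
- : float = 1.5707963267948966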

Segments on a plane
From individual points on a plane, we transition to segments, which are simply pairs of points:

type segment = point * point

The following definitions allow one to draw segments using our plotting framework, and also provide some default segments to experiment with:

(* Draw a segment *)
let draw_segment ?color:(color = Graphics.black) (a, b) = 
  let open Graphics in
  draw_point ~color:color a;
  draw_point ~color:color b;
  let iax, iay = point_to_orig a in
  moveto iax iay;
  set_color color;
  let ibx, iby = point_to_orig b in
  lineto ibx iby;
  go_to_origin ()

module TestSegments = struct
  include TestPoints
  let s0 = (q, p)
  let s1 = (p, s)
  let s2 = (r, s)
  let s3 = (r, t)
  let s4 = (t, p)
  let s5 = (Point (-100., -100.), Point (100., 100.))
  let s6 = (Point (-100., 100.), Point (100., -100.))
end

Generating random points on a segment
It is easy to generate random points and segments within a given range f:

let gen_random_point f = 
  let ax = Random.float f in
  let ay = Random.float f in
  let o = Point (f /. 2., f /. 2.) in
  Point (ax, ay) -- o

let gen_random_segment f = 
  (gen_random_point f, gen_random_point f)

We can exploit the fact that any point \(z\) on a segment \([p_1, p_2]\) can be obtained as \(z = p_1 + t (p_2 - p_1)\) for some \(0 \leq t \leq 1\). Here, both addition and subtraction are vector operations, encoded by (++) and (--) correspondingly:

let gen_random_point_on_segment seg = 
  let (p1, p2) = seg in
  let Point (dx, dy) = p2 -- p1 in
  let f = Random.float 1. in
  let p = p1 ++ (dx *. f, dy *. f) in
  p

Let us experiment:

utop # clear_screen ();;
utop # let s = (Point (-300., -200.), Point (200., 248.));;
utop # let z = gen_random_point_on_segment s;;
val z : point = Point (51.3295884528682222, 114.791311253769891)
utop # draw_segment s;;
utop # draw_point ~color:Graphics.red z;;

Collinearity of segments
Two segments are collinear (i.e., belong to the same straight line), if each of the endpoints of one segment forms a 0-turn (i.e., neither left nor right) with the two endpoints of the other segment. Therefore, we can check the collinearity of two segments s1 and s2 as follows:

(* Checking if segments are collinear *)
let collinear s1 s2 = 
  let (p1, p2) = s1 in
  let (p3, p4) = s2 in
  let d1 = direction p3 p4 p1 in
  let d2 = direction p3 p4 p2 in
  d1 = 0 && d2 = 0

A point p is on a segment [a, b] iff [a, p] and [p, b] are collinear, and both coordinates of p lie between the coordinates of a and b. Let us leverage this insight in the following checker:


(* Checking if a point is on a segment *)
let point_on_segment s p = 
  let (a, b) = s in
  if not (collinear (a, p) (p, b))
  then false
  else 
    let Point (ax, ay) = a in
    let Point (bx, by) = b in
    let Point (px, py) = p in
    min ax bx <=~ px && px <=~ max ax bx &&
    min ay by <=~ py && py <=~ max ay by

Using directions and point_on_segment, we can now check whether two segments intersect. Two collinear segments intersect if an endpoint of one of them lies on the other. For non-collinear segments, the standard criterion applies: they intersect if the endpoints of each segment lie on the opposite sides of the other segment, or an endpoint of one segment lies exactly on the other:

let segments_intersect s1 s2 = 
  if collinear s1 s2
  then
    let (p1, p2) = s1 in
    let (p3, p4) = s2 in
    point_on_segment s1 p3 || point_on_segment s1 p4 ||
    point_on_segment s2 p1 || point_on_segment s2 p2
  else
    let (p1, p2) = s1 in
    let (p3, p4) = s2 in
    let d1 = direction p3 p4 p1 in
    let d2 = direction p3 p4 p2 in
    let d3 = direction p1 p2 p3 in
    let d4 = direction p1 p2 p4 in
    if (d1 > 0 && d2 < 0 || d1 < 0 && d2 > 0) &&
       (d3 > 0 && d4 < 0 || d3 < 0 && d4 > 0)
    then true
    else if d1 = 0 && point_on_segment s2 p1 then true
    else if d2 = 0 && point_on_segment s2 p2 then true
    else if d3 = 0 && point_on_segment s1 p3 then true
    else if d4 = 0 && point_on_segment s1 p4 then true
    else false

Finding intersections
Sometimes we need to find the exact point where two segments intersect. In the case of collinear segments that intersect, this reduces to the enumeration of four possible options (at least one endpoint of one segment should belong to the other segment). The case of non-collinear segments [p1; p2] and [p3; p4] can be solved if each is represented in the form \(p_1 + t r\) and \(p_3 + u s\), where \(r\) and \(s\) are the vectors connecting the endpoints of each segment correspondingly, and \(t\) and \(u\) are scalar values ranging from 0 to 1. We need to find \(t\) and \(u\) such that \(p_1 + t r = p_3 + u s\). To solve this equation (which has two variables), we multiply both sides, using the cross-product, by either \(r\) or \(s\). In the former case we get \((p_1 + t r) \times s = (p_3 + u s) \times s\). Since \(s \times s\) is zero, we can get rid of the variable \(u\), and find the desired \(t\) as in the implementation below:

let find_intersection s1 s2 = 
  let (p1, p2) = s1 in
  let (p3, p4) = s2 in

  if not (segments_intersect s1 s2)
  then None
  else if collinear s1 s2
  then
    if point_on_segment s1 p3 then Some p3
    else if point_on_segment s1 p4 then Some p4
    else if point_on_segment s2 p1 then Some p1
    else Some p2
  else
    let r = Point (get_x p2 -. get_x p1, get_y p2 -. get_y p1) in
    let s = Point (get_x p4 -. get_x p3, get_y p4 -. get_y p3) in
    assert (not @@ is_zero @@ cross_product r s);

    (* (p1 + t r) × s = (p3 + u s) × s, and s × s = 0, hence
       t = (p3 − p1) × s / (r × s) *)

    let t = (cross_product (p3 -- p1) s) /. (cross_product r s) in
    let Point (rx, ry) = r in
    let p = p1 ++ (rx *. t, ry *. t) in
    Some p

We can graphically validate the result:

utop # let s1 = (Point (113.756053827471192, -175.292497988606272),
                 Point (18.0694083766823042, 124.535770332375932));;
utop # let s2 = (Point (59.0722072343553464, -171.91124390306868),
                 Point (139.282462974003465, 20.2804812244832249));;
utop # draw_segment s1;;
utop # draw_segment s2;;
utop # let z = Week_01.get_exn @@ find_intersection s1 s2;;
utop # draw_point ~color:Graphics.red z;;
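We can also validate it on a deterministic input: the two diagonal segments s5 and s6 from TestSegments cross exactly at the origin (assuming the definitions above):

utop # open TestSegments;;
utop # segments_intersect s5 s6;;
- : bool = true
utop # find_intersection s5 s6;;
- : point option = Some (Point (0., 0.))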

Working with Polygons
• File: Polygons.ml

From points and segments we move to more interesting two-dimensional objects: polygons. To work with them, we will require a couple of auxiliary functions:

include Points

(* Some utility functions *)

let rec all_pairs ls = match ls with
  | [] -> []
  | _ :: [] -> []
  | h1 :: h2 :: t -> (h1, h2) :: (all_pairs (h2 :: t))

let rec all_triples ls = 
  let (a, b) = (List.hd ls, List.hd @@ List.tl ls) in
  let rec walk l = match l with
    | x :: y :: [] -> [(x, y, a); (y, a, b)]
    | x :: y :: z :: t -> (x, y, z) :: (walk (y :: z :: t))
    | _ -> []
  in
  assert (List.length ls >= 3);
  walk ls

(* Remove duplicates without sorting, via OCaml hashtable *)
let uniq lst =
  let seen = Hashtbl.create (List.length lst) in
  List.filter (fun x ->
      let tmp = not (Hashtbl.mem seen x) in
      Hashtbl.replace seen x ();
      tmp) lst

Encoding and rendering polygons
A polygon can be represented as a list of points:

type polygon = point list

We will use the following convention to interpret this list as a sequence of polygon vertices: as we "walk" along the list, the polygon is always on our left. OCaml's representation of polygons uses the same convention.

It is more convenient to define polygons via lists of integer pairs (unless we specifically need coordinates expressed with decimals), hence the following auxiliary function:

let polygon_of_int_pairs l = 
  List.map (fun (x, y) ->
      Point (float_of_int x, float_of_int y)) l

A very common operation is to shift a polygon in a certain direction. This can be done as follows:

let shift_polygon (dx, dy) pol = 
  List.map (function Point (x, y) ->
      Point (x +. dx, y +. dy)) pol

OCaml provides a special function draw_poly to render polygons, and we implement our machinery relying on it:

let draw_polygon ?color:(color = Graphics.black) p = 
  let open Graphics in
  set_color color;
  let ps_array = list_to_array @@ List.map point_to_orig p in
  draw_poly ps_array;
  set_color black

Some useful polygons
The following module defines a number of polygons with interesting properties:

module TestPolygons = struct

  let triangle = 
    [(-50, 50); (200, 0); (200, 200)] |> polygon_of_int_pairs

  let square = 
    [(100, -100); (100, 100); (-100, 100); (-100, -100)] |>
    polygon_of_int_pairs

  let convexPoly2 = 
    [(100, -100); (200, 200); (0, 200); (0, 0)] |> polygon_of_int_pairs

  let convexPoly3 = 
    [(0, 0); (200, 0); (200, 200); (40, 100)] |> polygon_of_int_pairs

  let simpleNonConvexPoly = 
    [(0, 0); (200, 0); (200, 200); (100, 50)] |> polygon_of_int_pairs

  let nonConvexPoly5 = 
    [(0, 0); (0, 200); (200, 200); (-100, 300)] |>
    polygon_of_int_pairs |> shift_polygon (-50., -100.)

  let bunnyEars = 
    [(0, 0); (400, 0); (300, 200); (200, 100); (100, 200)] |>
    polygon_of_int_pairs |> shift_polygon (-100., -50.)

  let lShapedPolygon = 
    [(0, 0); (200, 0); (200, 100); (100, 100); (100, 300); (0, 300)] |>
    polygon_of_int_pairs |> shift_polygon (-150., -150.)

  let kittyPolygon = 
    [(0, 0); (500, 0); (500, 200); (400, 100); (100, 100); (0, 200)] |>
    polygon_of_int_pairs |> shift_polygon (-250., -150.)

  let simpleStarPolygon = 
    [(290, 0); (100, 100); (0, 290); (-100, 100); (-290, 0); 
     (-100, -100); (0, -290); (100, -100)] |> polygon_of_int_pairs

  let weirdRectPolygon = 
    [(0, 0); (200, 0); (200, 100); (100, 100); (100, 200); 
     (300, 200); (300, 300); (0, 300)] |>
    polygon_of_int_pairs |> shift_polygon (-150., -150.)

  let sand4 = 
    [(0, 0); (200, 0); (200, 100); (170, 100); (150, 40); 
     (130, 100); (0, 100)] |>
    polygon_of_int_pairs |> shift_polygon (-30., -30.)

  let tHorror = 
    [(100, 300); (200, 100); (300, 300); (200, 300); (200, 400)] |>
    polygon_of_int_pairs |> shift_polygon (-250., -250.)

  let chvatal_comb = 
    [(500, 200); (455, 100); (400, 100); (350, 200); (300, 100); 
     (250, 100); (200, 200); (150, 100); (100, 100); (50, 200); 
     (0, 0); (500, 0)] |>
    polygon_of_int_pairs |> shift_polygon (-200., -70.)

  let chvatal_comb1 = 
    [(500, 200); (420, 100); (400, 100); (350, 200); (300, 100); 
     (250, 100); (200, 200); (150, 100); (100, 100); (50, 200); 
     (0, 70); (500, 70)] |>
    polygon_of_int_pairs |> shift_polygon (-200., -70.)

  let shurikenPolygon = 
    [(390, 0); (200, 50); (0, 290); (50, 150); (-200, -100); (0, 0)] |>
    polygon_of_int_pairs |> shift_polygon (-80., -70.)

end

Let us render some of those:

utop # open Polygons;;
utop # open TestPolygons;;
utop # mk_screen ();;
utop # draw_polygon kittyPolygon;;
utop # let k1 = shift_polygon (50., 50.) kittyPolygon;;
utop # draw_polygon k1;;

Basic polygon manipulations
In addition to moving polygons, we can also resize and rotate them. The first operation is done by multiplying all vertices (as if they were vectors) by the given factor:

let resize_polygon k pol = 
  List.map (function Point (x, y) ->
      Point (x *. k, y *. k)) pol

For rotation, we need to specify the center, relative to which the rotation is going to be performed. After that, the conversion to polar coordinates and back does the trick:

let rotate_polygon pol p0 angle = 
  pol |>
  List.map (fun p -> p -- p0) |>
  List.map polar_of_cartesian |>
  List.map (function Polar (r, phi) -> Polar (r, phi +. angle)) |>
  List.map cartesian_of_polar |>
  List.map (fun p -> p ++ (get_x p0, get_y p0))

Here is an example of using those functions:

utop # let k2 = rotate_polygon k1 (Point (0., 0.)) (pi /. 2.);;
utop # clear_screen ();;
utop # draw_polygon k2;;

Queries about polygons
One of the non-trivial properties of a polygon is convexity. A polygon is convex if any segment connecting points on its edges fully lies within the polygon. Checking convexity directly from this definition is cumbersome, but there is a better way to do it, relying on the machinery for determining directions. In essence, a polygon is convex if no three consecutive vertices in it form a right turn:

let is_convex pol = 
  all_triples pol |>
  List.for_all (fun (p1, p2, p3) -> direction p1 p2 p3 <= 0)

Another common query is whether two polygons touch or intersect. For this, we first compute the list of a polygon's edges, closing the loop between its last vertex and the first one:

let edges pol = 
  if pol = [] then []
  else
    let es = all_pairs pol in
    let lst = List.rev pol |> List.hd in
    let e = (lst, List.hd pol) in
    e :: es

let polygons_touch_or_intersect pol1 pol2 = 
  let es1 = edges pol1 in
  let es2 = edges pol2 in
  List.exists (fun e1 ->
      List.exists (fun e2 ->
          segments_intersect e1 e2) es2)
    es1
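For instance (assuming the definitions above and the test polygons defined earlier):

utop # is_convex TestPolygons.square;;
- : bool = true
utop # is_convex TestPolygons.bunnyEars;;
- : bool = false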

Intermezzo: rays and intersections
The procedure above only checks for intersection of edges, but what if one polygon lies fully within another? How can we determine that? To answer this question, we need to be able to determine whether a certain point is within a given polygon. For this, we make a small detour and talk about another geometric construction: rays.

A ray is similar to a segment, but has only one endpoint, spreading to infinity in a certain direction. This is why we represent a ray by its origin and an angle in radians (encoded as a float), determining the direction in which it spreads:

type ray = point * float

let draw_ray ?color:(color = Graphics.black) r = 
  let (p, phi) = r in
  let open Graphics in
  let q = p ++ (2000. *. (cos phi), 2000. *. (sin phi)) in
  draw_segment ~color (p, q)

Given a ray \(R = (q, \phi)\) and a point \(p\) that belongs to the line of the ray, we can determine whether \(p\) is on \(R\) by means of the following function:


let point_on_ray ray p = 
  let (q, phi) = ray in
  (* Ray's direction *)
  let r = Point (cos phi, sin phi) in
  let u = dot_product (p -- q) r in
  u >=~ 0.

Notice that here we encode all points of \(R\) via the equation \(q + u r\), where \(r\) is a "directional" vector of the ray and \(0 \leq u\). We then solve the vector equation \(p = q + u r\), by multiplying both parts by \(r\) via the scalar product, and also noticing that \(r \cdot r = 1\). Finally, we check that \(u \geq 0\), to make sure that \(p\) is not lying "behind" the ray.

Now, we can find an intersection of a ray and a segment, in a way similar to how that was done in Section Points, Segments and their Properties:

let ray_segment_intersection ray seg = 
  let (p, p') = seg in
  let (q, phi) = ray in
  (* Segment's direction *)
  let s = Point (get_x p' -. get_x p, get_y p' -. get_y p) in
  (* Ray's direction *)
  let r = Point (cos phi, sin phi) in

  (* Ray and Segment are parallel *)
  if cross_product s r =~= 0. then
    (* Ray and Segment are collinear *)
    if cross_product (p -- q) r =~= 0.
    then
      if point_on_ray ray p then Some p
      else if point_on_ray ray p' then Some p'
      else None
    else None
  else begin
    (* Point on segment *)
    let t = (cross_product (q -- p) r) /. (cross_product s r) in
    (* Point on ray *)
    let u = (cross_product (p -- q) s) /. (cross_product r s) in
    if u >=~ 0. && t >=~ 0. && t <=~ 1.
    then
      let Point (sx, sy) = s in
      Some (p ++ (sx *. t, sy *. t))
    else None
  end

The machinery for determining whether a point is within a polygon will cast a ray from the point and count the intersections with the polygon's edges. To handle the corner cases, we will need the two neighbours of a given vertex in a polygon:

(* Get neighbours of a vertex *)
let get_vertex_neighbours pol v = 
  let arr = Array.of_list pol in
  let n = Array.length arr in
  assert (n >= 3);
  if v = arr.(0) then (arr.(n - 1), arr.(1))
  else if v = arr.(n - 1) then (arr.(n - 2), arr.(0))
  else
    let rec walk i = 
      if i = n - 1 then (arr.(n - 2), arr.(0))
      else if v = arr.(i) then (arr.(i - 1), arr.(i + 1))
      else walk (i + 1)
    in
    walk 1

let neighbours_on_different_sides ray pol p = 
  if not (List.mem p pol)
  then true
  else
    let (a, b) = get_vertex_neighbours pol p in
    let (r, d) = ray in
    let s = r ++ (cos d, sin d) in
    let dir1 = direction r s a in
    let dir2 = direction r s b in
    dir1 <> dir2

To avoid corner cases, it makes sense to cast the ray from p at an angle such that it is not collinear with any of the edges and does not pass through any of the vertices. For this, we will need the following function:

let choose_ray_angle pol p = 
  let Point (xp, yp) = p in
  let edge_angles = 
    edges pol |>
    List.map (fun (Point (x1, y1), Point (x2, y2)) ->
        let dx = x2 -. x1 in
        let dy = y2 -. y1 in
        atan2 dy dx)
  in
  let vertex_angles = 
    pol |>
    List.map (fun (Point (x1, y1)) ->
        let dy = y1 -. yp in
        let dx = x1 -. xp in
        atan2 dy dx)
  in
  let n = 2 * (List.length pol) + 1 in
  let candidate_angles = 
    iota (n + 1) |>
    List.map (fun i -> (float_of_int i) *. pi /. (float_of_int n))
  in
  let phi = List.find (fun c ->
      List.for_all (fun a -> not (a =~= c)) edge_angles &&
      List.for_all (fun a -> not (a =~= c)) vertex_angles)
      candidate_angles
  in
  phi

Now, we can determine whether the point is within the polygon:

(* Point within a polygon *)
let point_within_polygon pol p = 
  let ray = (p, (choose_ray_angle pol p)) in
  let es = edges pol in
  if List.mem p pol ||
     List.exists (fun e -> point_on_segment e p) es
  then true
  else begin
    let n = 
      edges pol |>
      List.map (fun e -> ray_segment_intersection ray e) |>
      List.filter (fun r -> r <> None) |>
      List.map (fun r -> get_exn r) |>
      (* Intersecting a vertex *)
      uniq |>
      (* Touching vertices *)
      List.filter (neighbours_on_different_sides ray pol) |>
      (* Compute length *)
      List.length
    in
    n mod 2 = 1
  end

A few corner cases have to be taken into account:

1. A ray may contain an entire edge of the polygon.
2. A ray may "touch" a sharp vertex; in this case the intersection should not count. However, if a ray "passes" through a vertex (as opposed to touching it), this should count as an intersection.

Case (1) does not happen, as we have chosen the ray so that it is not collinear with any of the edges. In case (2), the duplicate intersections need to be removed first, hence the use of uniq. The configuration can be detected by checking whether the two edges adjacent to the vertex suspected of "touching" lie on a single side of the ray, or on its two opposite sides. Only the second case (detected via neighbours_on_different_sides) needs to be counted.

We can test our procedure on the following polygon:

utop # let pol = TestPolygons.sand4;;
utop # let p = Point (-150., 10.);;
utop # let q = Point (50., 10.);;
utop # let r = Point (-150., 70.);;
utop # let s = Point (120., 70.);;
utop # point_within_polygon pol p;;
- : bool = false
utop # point_within_polygon pol q;;
- : bool = true
utop # point_within_polygon pol r;;
- : bool = false
utop # point_within_polygon pol s;;
- : bool = false

Convex Hulls
• File: ConvexHulls.ml

In the last section of our brief introduction to the problems, techniques, and algorithms of computational geometry, we take a look at a very simple yet practically relevant problem: constructing a convex hull of a set of points. A convex hull for a set of points \(S\) is the smallest polygon (in terms of area), such that all points from \(S\) are contained within it or are lying on its boundary. The definition implies that the vertices of the polygon are some points from \(S\). The image below shows an example of a convex hull for a set of 50 points:


Plane-sweeping algorithm
Let us study the following elegant algorithm, known as Graham scan, for computing a convex hull for a set of two-dimensional points. The algorithm relies on sorting and implements a "plane-sweeping" intuition by considering all points in a certain sequence, making sure to include only those into the hull-in-construction that do not disrupt the convexity property.

Our construction of a convex hull will rely on imperative stacks, which we will extend with the following definitions:

include Polygons
open Stacks

module StackX (S: AbstractStack) = struct
  include S

  let top s = match pop s with
    | None -> None
    | Some x ->
      push s x;
      Some x

  let next_to_top s = match pop s with
    | None -> None
    | Some x ->
      let y = top s in
      push s x;
      y

  let list_of_stack s = 
    let res = ref [] in
    while not (is_empty s) do
      let e = Util.get_exn @@ pop s in
      res := e :: !res
    done;
    !res
end

The crux of the algorithm is to sort the set of points twice, using different comparisons:

1. The first sort is done by the \(y\)-coordinate;

2. The second sort is done by the radial angle with respect to the point \(p_0\) with the smallest \(y\)-coordinate (taking the leftmost of the points with the smallest \(y\)-coordinate).

For these sortings, we will need the following two procedures:

(* Sort by axis Y *)
let axis_y_sorter (Point (x1, y1)) (Point (x2, y2)) =
  if y1 < y2 then -1
  else if y1 > y2 then 1
  else if x1 < x2 then -1
  else if x1 > x2 then 1
  else 0

(* Sort by polar angle wrt p0 *)
let polar_angle_sorter p0 p1 p2 = 
  let Polar (r1, a1) = p1 -- p0 |> polar_of_cartesian in
  let Polar (r2, a2) = p2 -- p0 |> polar_of_cartesian in
  if a1 < a2 then -1
  else if a1 > a2 then 1
  else if r1 < r2 then -1
  else if r1 > r2 then 1
  else 0

The main Graham algorithm is as follows:

(* Graham's Scan *)
let convex_hull points = 
  (* At least three points *)
  assert (List.length points >= 3);

Lecture Notes

310 let y_sorted = List.sort axis_y_sorter points in let p0 = y_sorted |> List.hd in match List.tl y_sorted |> List.sort (polar_angle_sorter p0) with | p1 :: p2 :: rest -> let open CHStack in let s = mk_stack 0 in push s p0; push s p1; push s p2; let non_left_turn p = let q1 = next_to_top s |> get_exn in let q2 = top s |> get_exn in direction q1 q2 p >= 0 in (* Main algorithm *) List.iter (fun p -> while non_left_turn p do ignore (pop s) done; push s p) rest; list_of_stack s | _ -> error "Cannot happen"
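The code above opens a module CHStack whose definition is not shown in these notes. A plausible definition (an assumption on our part, presuming that the Stacks library provides a list-based implementation of AbstractStack) instantiates the StackX functor with a concrete stack:

(* Assumed instantiation: extend a concrete stack with the helpers above *)
module CHStack = StackX(ListBasedStack)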

The main loop checks all the vertices in the order of increasing angle from p0 , making sure that the currently built hull for the subset of points processed so far is convex. It removes from the stack the points that violate this invariant.

Question: What is the complexity of the procedure in terms of the size n of the set of points?

Graham scan invariant

A moment from the middle of a run of Graham scan is shown in the picture below:


It is easy to see that the invariant of the main loop is that the points in the stack s always correspond to a convex polygon containing all the points observed so far. Hence, by induction, the resulting polygon is also convex. Since we have ways to check both of these properties (convexity, and containment of a point in a polygon), we can engineer the following tests for Graham scan.

First, let us generate a set of random points of a fixed size n :

let gen_random_points ?dim:(dim = 550.) n =
  let res = ref [] in
  for _ = 0 to n - 1 do
    let p = gen_random_point dim in
    res := p :: !res
  done;
  !res

Second, let us use it in a randomised test:


open ConvexHulls

let test_random_ch n =
  let ps = gen_random_points n in
  let ch = convex_hull ps in
  assert (is_convex ch);
  assert (List.for_all (point_within_polygon ch) ps)

let%test _ =
  for _ = 0 to 100 do
    test_random_ch 50
  done;
  true
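Assuming the project is built with dune and uses the ppx_inline_test preprocessor (which provides the let%test syntax), this randomised test can be executed with:

dune runtest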

Final Project: Vroomba Programming

• Final project starter code

The final project will consist of two parts: team-based coding assignments and individual implementation reports.

Coding Assignment

In these difficult times, it is particularly important to keep our living spaces clean and tidy. To help with this task, researchers from the NUS Faculty of Engineering have designed a new advanced cleaning robot called Vroomba [ 1 ] . In this project, you will develop a series of algorithms for navigating a Vroomba robot across arbitrary spaces so that it can clean them. The catch is: you will have to strive to minimise the number of “moves” the Vroomba needs to make to do its job.

A room is represented by a two-dimensional rectilinear polygon with all coordinates being integer values. The Vroomba occupies one 1x1 square, and its position is formally defined to be the bottom left corner of this square. The Vroomba instantly cleans the square it is located in. In addition to that, its mechanical brushes can clean the eight squares adjacent to its current position. Unfortunately, the manipulators cannot reach through walls or “wrap” around corners.
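For intuition, the neighbourhood cleaned by the Vroomba (ignoring walls, which the real solution must also account for) can be described as follows. This is a hypothetical helper, not part of the project template:

(* Squares cleaned by a Vroomba at position (x, y), including its own
   square and the eight adjacent ones; wall visibility is not checked *)
let cleaned_at (x, y) =
  List.map (fun (dx, dy) -> (x + dx, y + dy))
    [(-1, -1); (0, -1); (1, -1);
     (-1,  0); (0,  0); (1,  0);
     (-1,  1); (0,  1); (1,  1)]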


Your goal in this task is to compute, for a Vroomba robot that starts the job at the position (0, 0) , as good a route as possible to clean the entire area of a room. For example, consider a room defined as the polygon with coordinates (0, 0); (6, 0); (6, 1); (8, 1); (8, 2); (6, 2); (6, 3); (0, 3) and shown on the image below.

In order to clean this entire room, the Vroomba positioned initially at the coordinate (0, 0) can move by following the route defined by the string of moves WDDDDDD (all characters are capital), where W makes the Vroomba move one square up, D moves it right, S is “down”, and A is “left”. The figure above shows an initial (0, 0) , an intermediate (4, 1) , and the final (6, 1) position of the Vroomba following this route. Notice that there was no need for the robot to step on any other squares, as its brushes cleaned the remaining parts of the room while it was following the route. The suggested route is a valid one for this room, as (a) it does not force the Vroomba to go outside the room boundaries, and (b) by following it, the Vroomba will have cleaned all the squares in the room. Naturally, for more complex rooms the routes are going to be longer and will potentially use all four move commands in some sequence.
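The semantics of the four move commands can be captured by a small interpreter over positions. The following is a minimal sketch; the function names are our own and not part of the project template:

(* One step of the Vroomba from position (x, y) *)
let move (x, y) = function
  | 'W' -> (x, y + 1)   (* up *)
  | 'D' -> (x + 1, y)   (* right *)
  | 'S' -> (x, y - 1)   (* down *)
  | 'A' -> (x - 1, y)   (* left *)
  | c   -> failwith (Printf.sprintf "Unsupported move: %c" c)

(* All positions visited when following a route from the start (0, 0) *)
let positions_of_route route =
  let pos = ref (0, 0) in
  let acc = ref [ !pos ] in
  String.iter (fun c ->
      pos := move !pos c;
      acc := !pos :: !acc) route;
  List.rev !acc

For instance, positions_of_route "WDDDDDD" produces the sequence starting at (0, 0) , passing through (4, 1) , and ending at (6, 1) , matching the example above.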


When tackling this project, you should strive to find, for a given arbitrary room, a valid Vroomba route that is as short as possible (the length of a route is the length of the corresponding string). While it might be difficult to find the optimal (i.e., the shortest) route, please do your best to come up with a procedure that finds a “reasonably” good solution: for instance, one that does not make the Vroomba move into every single square of the room, but relies on the range of its brushes instead. Even the best solution might require some amount of backtracking, forcing the Vroomba to step on the same square more than once. While your procedure is allowed to be computationally expensive (and you should explain the sources of its complexity in the report), it should terminate in a reasonable time (within 20 seconds) for the ten rooms from the provided test file.

The template GitHub project (link available on Canvas) provides a README.md file with an extensive technical specification of the sub-tasks of this project, as well as a number of hints and suggestions on splitting the workload within the team.

Report

The reports are written and submitted on Canvas individually. They should focus on the following aspects of your experience with the project:

• A high-level overview of your implementation design. How did you define the basic data structures, and what algorithmic decisions did you take? Please don’t quote the code verbatim at length (you may provide 3-4 line code snippets, if necessary). Pictures, screenshots, and drawings are very welcome, but are not strictly required.

• What were your Vroomba solver strategies, interesting polygon generation patterns, or game part enhancements? How do you estimate the complexity of your solver as a function of the size of a room (the number of 1x1 squares in it)?

• What did you consider to be the important properties of your implementation? How did you test them?

• How was the implementation effort split, and what were your personal contributions? Did you make use of the suggested split?

• Any discoveries, anecdotes, and gotchas, elaborating on your experience with this project.

Your individual report should not be very long; please try to make it succinct and to the point: 3-4 pages should be enough.

[1] Any relation to the existing products or trademarks is accidental.


Slides and Supplementary Materials

• Week 01 Introductory Slides
• Matrix determinants in Haskell
• Week 04 Slides on Advanced Sorting
• Week 05 Slides on Best-Worst Sorting Complexity
• Week 09 Slides on Hamiltonian Paths
• Board shots from the Backtracking lecture
• Week 13 Slides (Wrapping Up)


Examples and Code

• Code from the lectures
• Example OCaml project with graphics
• Midterm project starter code
• Final project starter code
• Sources of these lecture notes

Pull requests are welcome!


Textbooks

On Algorithms and Data Structures

1. Introduction to Algorithms by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein
2. (Optional) Algorithms, 4th edition by Robert Sedgewick and Kevin Wayne. This book has some great in-depth examples of basic algorithms.

On OCaml

1. Real World OCaml by Yaron Minsky and Anil Madhavapeddy

© Copyright 2021, Ilya Sergey. Created using Sphinx 7.2.6.