
  Introduction to

  Algorithms &

  Data Structures 3

  Learn Linear Data Structures with

  Videos & Interview Questions

 

  Bolakale Aremu

  Charles Johnson Jr.

  Ojula Technology Innovations

  ___________________

 

___________________

  This is an electronic version of the print textbook. Due to electronic rights restrictions, some third-party content may be suppressed. Editorial review has deemed that any suppressed content does not materially affect the overall learning experience. The publisher reserves the right to remove content from this title at any time if subsequent rights restrictions require it. For valuable information on pricing, previous editions, changes to current editions, and alternate formats, please contact [email protected] and inquire by ISBN number, author, or title for materials in your areas of interest.

 

  Introduction to Algorithms & Data Structures 3

  First Edition

 

  © 2023 Ojula Technology Innovations®

 

9791222443188

  All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews. Every effort has been made in the preparation of this book to ensure the accuracy of the information presented.

  Ojula Technology Innovations is a leading provider of customized learning solutions with employees residing in nearly 45 different countries and sales in more than 130 countries around the world. For more information, please contact

  Printed in the United States of America

  Print Number: 01

  Print Year: September 2023

 

I am indebted to all the contributors of this great work for their hard work and support throughout the time of producing this book and the accompanying videos.

  Bolakale Aremu

 

Table of Contents

  0. What You Get

  0.1. Requirements

  0.2. Benefits of Learning Algorithms and Data Structures

  0.3. How This Course is Structured

  1. Introduction to Linear Data Structures

  1.1. What is an Algorithm?

  1.2. What is Data Structure?

  2. The Big O Notation

  2.1. What is Big O? – 00:00 min.

  2.2. O(1) – 01:59 min.

  2.3. O(n) – 03:28 min.

  2.4. O(n²) – 07:13 min.

 

2.5. O(log n) – 09:37 min.

  2.6. O(2ⁿ) – 12:16 min.

  2.7. Space Complexity – 13:07 min.

  3. Arrays

  3.1. Introduction – 00:00 min.

  3.2. Arrays – 00:45 min.

  3.2.1. How an Array Stores Items

  3.2.2. Limitations/Weaknesses of Arrays

  3.3. Working with Arrays – 03:53 min.

  3.4. Exercise 1: Building an Array – 07:23 min.

  3.5. Solution 1: Building the Array – 10:15 min.

  3.5.1. Public Class

  3.5.2. Private Class

  3.6. Solution 2: Insert Method – 13:34 min.

  3.7. Solution 3: Remove Method – 17:54 min.

  3.7.1. Validation of Index

  3.8. Solution 4: Search (IndexOf) Method – 22:45 min.

  3.9. Dynamic Arrays – 25:13 min.

  3.9.1. Overview of the Vector Class

  3.10. Wrap up – 29:02 min.

  3.10.1. Runtime Complexities of Various Operations

  4. Linked Lists

  4.1. Introduction – 00:00 min.

  4.2. What Are Linked Lists? – 00:37 min.

  4.3. Working with Linked Lists – 05:10 min.

  4.4. Exercise 2: Building a Linked List – 08:34 min.

  4.5. Solution: Building a Linked List – 09:59 min.

  4.6. Implementing Size – 30:27 min.

  4.7. Converting Linked Lists to Arrays – 34:42 min.

  4.8. Cheat Sheets – 36:53 min.

  4.9. Arrays vs Linked Lists – 38:05 min.

  4.10. Types of Linked Lists – 41:26 min.

  4.11. Exercise 3: Reversing a Linked List – 44:41 min.

  4.12. Solution: Reversing a Linked List – 46:14 min.

  4.13. Exercise 4: Kth Node from the End – 55:15 min.

  4.14. Solution: Kth Node from the End – 58:35 min.

  4.15. Wrap up – 1:03:57 min.

  5. Stacks

  5.1. Introduction – 00:00 min.

  5.2. What are Stacks? – 00:32 min.

  5.2.1. Structure of Stacks

  5.2.2. Operations that Stacks Support

  5.3. Working with Stacks – 03:19 min.

  5.4. Exercise 5: Reversing a String with Stack – 05:40 min.

  5.5. Solution: Reversing a String with Stack – 06:21 min.

  5.6. Exercise 6: Balanced Expressions – 11:22 min.

  5.7. Solution 1: A Basic Implementation – 14:17 min.

  5.8. Solution 2: Supporting Multiple Brackets – 19:35 min.

  5.9. Solution 3: First Refactoring – 23:11 min.

  5.10. Solution 4: Second Refactoring – 27:20 min.

  5.11. Exercise 7: Implementing a Stack from Scratch – 33:12 min.

  5.12. Solution: Implementing a Stack from Scratch – 33:59 min.

  5.13. Wrap up – 42:17 min.

  5.13.1. Key Points About Stacks

  6. Queues

  6.1. Introduction – 00:27 min.

  6.1.1. Applications of Queues

  6.1.2. Common Methods for Implementing Queues:

  6.1.3. Operations and Runtime Complexities of Queues

  6.2. Working with Queues – 02:31 min.

  6.3. Exercise 8: Reversing a Queue – 07:44 min.

  6.4. Solution: Reversing a Queue – 08:50 min.

  6.5. Exercise 9: Building a Queue Using Arrays – 11:07 min.

  6.6. Solution 1: A Basic Implementation – 13:11 min.

  6.7. Solution 2: Using Circular Arrays – 19:43 min.

  6.7.1. Circular Queue Operations

  6.8. Exercise: Building a Queue Using Stacks – 25:37 min.

  6.9. Solution: Building a Queue Using Stacks – 26:32 min.

  6.10. Priority Queues – 34:15 min.

  6.10.1. Key Features of a Priority Queue

  6.10.2. Applications of Priority Queues

  6.11. Exercise: Building a Priority Queue – 36:09 min.

  6.12. Solution 1: Building a Priority Queue – 40:06 min.

  6.13. Solution 2: Refactoring Our Code – 48:57 min.

  6.14. Wrap up – 51:59 min.

  7. Hash Tables

  7.1. Introduction – 00:00 min.

  7.2. What are Hash Tables – 00:27 min.

  7.2.1. Benefits & Applications of Hash Tables

  7.2.2. How Hash Tables Work

  7.2.3. Operations Supported by Hash Tables

  7.3. Working with Hash Tables – 03:11 min.

  7.4. Exercise: First Non-repeated Character – 09:18 min.

  7.5. Solution: First Non-repeated Character – 10:13 min.

  7.6. Sets – 17:52 min.

  7.7. Exercise: First Repeated Character – 20:16 min.

  7.8. Solution: First Repeated Character – 20:48 min.

  7.9. Hash Functions – 23:24 min.

  7.9.1. How HashCode Works

  7.10. Collisions – 29:19 min.

  7.10.1. How to Handle Collisions

  7.11. Chaining – 30:26 min.

  7.12. Linear Probing – 32:06 min.

  7.12.1. Linear Probing

  7.13. Quadratic Probing – 34:48 min.

  7.14. Double Hashing – 36:17 min.

  7.14.1. Review of All the Probing Algorithms Theory

  7.15. Exercise: Building a Hash Table – 39:37 min.

  7.16. Solution: put( ) – 42:14 min.

  7.17. Solution: get( ) – 48:21 min.

  7.18. Solution: remove( ) – 52:51 min.

  7.19. Solution: Refactoring & Automated Testing – 55:22 min.

  7.20. Wrap up – 1:06:26 min.

  7.21. Coming up Next

  7.22. How to Download Tutorial Videos & Other Resources

 

About The Authors & Contributors

  We are software developers. We've spent over 17 years as software developers, and have done a bunch of other things too. We've been involved in SDLC/process, machine learning, data science, and operating system security and architecture. Our most recent project is serverless computing, where we simplify building and running distributed systems. We always take a practical approach in our projects and courses.

  Bolakale Aremu & Charles Johnson Jr.
  Ojula Technology Innovations
  Web Developers and Software Engineers
  Ojulaweb.com

  0. What You Get

  This playbook is the third volume of the series Introduction to Algorithms & Data Structures. It is written in the form of a course. It is a very comprehensive data structures and algorithms book, packed with:

  text tutorials with a lot of illustrations,
  5 hours of HD video tutorials,
  popular interview questions asked by Google, Microsoft, Amazon and other big companies,
  practice exercises, and
  the code written during the course and the screenshots used in this book.

  Most data structure books and courses are too academic and boring. They have too much math, and their code looks ugly, old and disgusting! This book is bundled with tutorial videos that are fun and easy to follow along, and it shows you how to write beautiful code like a software engineer, not a mathematician.

  Mastering data structures and algorithms is essential to getting your dream job. So, don't waste your time browsing disconnected tutorials or super long, boring courses.

  0.1. Requirements

  In this volume, we use Java to teach the concepts, but you can apply these concepts in any programming language. Our focus is on problem-solving, not programming languages and tools.

  All you need to understand the code is some basic programming skills, which were already taught in the first and second volumes of the series. Alternatively, if you already know variables, loops, and conditional statements, you're good to go.

  0.2. Benefits of Learning Algorithms and Data Structures

  First, it can help you get your dream software engineering job. If you studied Computer Science but never really understood the complex topic of data structures and algorithms, this playbook will help you. If you are a self-taught programmer, with little to no knowledge of this important topic, you too will find this course very helpful.

  If you failed a job interview because you couldn't answer basic data structure and algorithm questions, just study this book and its accompanying videos. Understanding data structures and algorithms is crucial if you want to excel as a software engineer. That's why companies like Google, Microsoft and Amazon always include interview questions on data structures and algorithms.

  I will teach you everything you need to know about data structures and algorithms so you can ace your coding interview with confidence. This course is a perfect mix of theory and practice, packed with over 100 popular interview questions.

  Another benefit is that data structures and algorithms will make you think more logically. They can help you design better systems for storing and processing data, and they serve as tools for optimization and problem-solving.

 

As a result, the concepts of algorithms and data structures are very valuable in any field. For example, you can use them when building a web app or writing software for other devices. You can apply them to machine learning and data analytics, which are two hot areas right now. And if you are a hacker, algorithms and data structures are important for you too.

  Now, whatever your preferred learning style, I've got you covered. If you're a visual learner, you'll love my HD videos, and illustrations throughout this book. If you're a practical learner, you'll love my hands-on lessons and practice exercises so that you can get practical with algorithms and data structures and learn in a hands-on way.

  0.3. How This Course is Structured

  The contents of this course are divided into six parts so you can easily complete it.

  The following linear data structures are covered in this course:

  Big O Notation
  Arrays
  Linked Lists
  Stacks
  Queues
  Hash Tables (Dictionaries)

  At the end of many sections of this course, short practice exercises are provided to test your understanding of the topic discussed. Solutions are also provided so you can check how well you have performed in each section. At the end of this book, you will find a link to download all the helpful resources, such as the videos, all the code and screenshots used in the tutorials, and a bunch of practice exercises. You can use them for quick reference and revision as well. My support link is also provided so that you can contact me any time if you have questions or need further help.

  After you have studied this course, you will understand what data structures are, how they are measured and evaluated, and how they are

used to solve real-life problems. So, everything you need is right here. I really hope you’ll enjoy it. Are you ready? Let's dive in!

1. Introduction to Linear Data Structures

  In this volume, you will learn linear data structures such as arrays, linked lists, stacks, queues, and hash tables. In volume four, we will look at nonlinear data structures such as trees, heaps, and graphs. This volume and the other volumes in the series are great for computer science students whose lectures failed to explain these concepts, and for anyone who is preparing for a job interview.

  A lot of companies, especially big companies like Google, Microsoft, and Amazon, ask data structure and algorithm questions in their interviews to see if you know how to think like a programmer. The materials in this series will change how you think about coding. They teach you how to think like a programmer and design fast and scalable algorithms.

  I'll be using Java in this course because it's a universally understood language, but you can use any language you're familiar with. This is because our focus is on data structures and algorithms, not programming languages. If you're a C# developer, you can get started immediately because Java and C# are very similar. Furthermore, I'll be writing the code in IntelliJ IDEA, an IDE (code editor) that you can download for free. However, you can use any code editor you are comfortable with. Our focus is on algorithm design, not tooling.

  In the “Getting Started” folder, I included a video that teaches you step by step how to install Java and other necessary tools, such as IntelliJ, for building Java applications.

  Video Part 3 > 1. Getting Started  > 1. Java_and_intelliJ_Installation > Installing Java.mp4 (14:07 min)

  In case Java is completely new to you, the video also explains how to write a simple Java program. Alternatively, you can watch one of the many videos available for free on YouTube on how to install Java and IntelliJ on your Windows or Mac computer.

  Now, this volume is a bit different from volumes 1 and 2, where you simply read and learn new things. In this volume, you also have to solve problems, a lot of problems, and that's how you will learn the art of problem solving. All the exercises you see in this volume are popular interview questions. These are the exercises that teach you how to think like a programmer. So, every single one of them is important.

  Some of these exercises might be a bit challenging. Don’t quickly jump to my solution. Do your best to solve the problem on your own. What matters is that you spend time thinking about various ways to solve a problem. This process will activate certain parts of your brain, and that's what matters, not the solution.

  If you can't complete an exercise in a timely manner, that's perfectly fine. Don't get disappointed. You're a student of this course, so you're learning. If you could complete every exercise immediately, you wouldn't need this course, right? So, here's the bottom line: this course is going to change how you think about programming. You'll learn how to solve problems and write beautiful code like a professional software engineer.

 

I worked really hard for this series to be the ultimate data structures course in the world, and I hope you think that too. Use the support link at the bottom of this book to let me know your thoughts. I cannot wait to hear what you think. Now let's jump in and get started!

  1.1. What is an Algorithm?

  Simply put, an algorithm is a set of steps or instructions for completing a task. As explained in volume one of this series, an algorithm is a step-by-step set of instructions or a well-defined procedure for solving a specific problem or performing a task. It's a fundamental concept in computer science and mathematics, used to describe the process of solving problems in a systematic and repeatable way. Algorithms can be represented in various forms, such as natural language descriptions, flowcharts, pseudocode, or programming code.

  1.2. What is Data Structure?

  A data structure is a way of organizing and storing data in a computer's memory so that it can be efficiently accessed, manipulated, and managed. It provides a framework for organizing and storing data in a structured manner, which enables various operations to be performed on that data efficiently. Data structures play a crucial role in computer science and programming, as they influence the efficiency and effectiveness of algorithms and software systems.

  Data structures are classified into two types:

  1. Linear Data Structure – In this type of data structure, the data elements are arranged in a linear, sequential order, though they do not have to be stored contiguously in memory. Examples are arrays, linked lists, stacks, and queues.

  2. Non-Linear Data Structure – In this type of data structure, the data elements are arranged in a non-linear order. Examples are trees and graphs.

  2. The Big O Notation

  Video Part 3 > 2. The Big O Notation (15:38 min)

  2.1. What is Big O? – 00:00 min.

  What is this Big O all about? Well, let's start with the classic definition on Wikipedia.

  Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity.

  Ha! That's the reason why a lot of people find Big O scary. But as you will see in this section, the underlying concepts are actually not that hard. We use Big O to describe the performance of an algorithm, and this helps us determine whether a given algorithm is scalable or not; that is, whether it is going to scale well as the input grows really large.

  So, just because your code executes quickly on your computer doesn't mean it's going to perform well when you give it a large data set. That's why we use the Big O notation, to describe the performance of an algorithm.

  Now, what does this have to do with data structures? Well, as you will learn in this course, certain operations can be more or less costly depending on what data structure we use. For example, accessing an array element by its index is super fast.

 

  Figure 2.1: Accessing an array’s first element by its index 0

  But arrays have a fixed length, and if you want to constantly add or remove items from them, they have to get resized. This will get costly as the size of our input grows very large.

  So, if that's what we need to do, then we have to use another data structure called a linked list. Linked lists can grow or shrink very quickly, but accessing a linked list element by its index is slow. That's why you need to learn about the Big O notation before we can talk about the various data structures. Also, big companies like Google, Microsoft and Amazon always ask about Big O; they want to know whether you really understand how scalable an algorithm is.

  Finally, knowing Big O will make you a better developer or software engineer. So, let’s go over the next few sections where we're going to look at

various code snippets and use the Big O notation to describe the performance of our algorithms.

  2.2. O(1) – 01:59 min.

  Figure 2.2 shows a code snippet (not complete) which we will use as our first example.

 

  Figure 2.2: A method that runs in a constant amount of time.

  This method takes an array of integers and prints the first item, numbers[0], on the console. It doesn't matter how big the array is; we can have an array with 1 item or 1 million items. All we're doing here is printing the first item. So, this method has a single operation and takes a constant amount of time to run.

  We don't care about the exact execution time in milliseconds, because that can differ from one machine to another or even between runs on the same machine. All we care about is that this method runs in constant time, and we represent that using O(1). This is the runtime complexity of this method. So, in this example, the size of our input doesn't matter. This method will always run in constant time, or O(1).
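
  The figure's code isn't reproduced here, but a minimal sketch of the method just described might look like this (the method name printFirst is mine, for illustration):

  public void printFirst(int[] numbers) {
      System.out.println(numbers[0]);   // a single operation, regardless of the array's size: O(1)
  }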

 

Now, what if we duplicate line 7? We will then have two operations, both of which run in constant time. So, we could say the runtime complexity of this method is O(2). But when talking about runtime complexity, we don't really care about the exact number of operations; we just want to know how much an algorithm slows down as the input grows larger.

  So in this example, whether we have 1 or 1 million items, our method runs in constant time. So we can simplify this by writing O(1), meaning constant time. See Figure 2.3.

 

  Figure 2.3: Two operations that run in constant time. Runtime complexity is still  O(1).

  Let's look at another example in the next section.

  2.3. O(n) – 03:28 min.

  What if we have a slightly more complex example? Let's say we have a loop that iterates over all the items in an array and prints each item on the console, such as the one shown in Figure 2.4. This is where the size of the input matters. If we have a single item in this array, we're going to have one print operation. If we have a million items, obviously we're going to have one million print operations.

  So the cost of this algorithm grows linearly and in direct correlation to the size of the input. We represent the runtime complexity of this method using O(n), where n represents the size of the input. As n grows, the cost of this algorithm grows linearly with it. See Figure 2.4.

 

  Figure 2.4: A for loop that runs in linear time. Runtime complexity is O(n).
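
  A minimal sketch of the kind of method the figure shows (the name printAll is illustrative):

  public void printAll(int[] numbers) {
      for (int i = 0; i < numbers.length; i++)
          System.out.println(numbers[i]);   // one print per item: O(n)
  }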

  Now, it doesn't matter what kind of loop we use to iterate over this array. In the example shown in Figure 2.4, we're using a traditional for loop. We could also use a for-each loop to achieve the same result, or a while loop, or a do-while loop.

  Now, what if we have a print statement before and after our loop? What do you think is the runtime complexity of this method? Well, you saw before that a single operation runs in constant time. So we had O(1). You also saw that our loop runs in O(n). If we have another single print operation, once again, we will have O(1).

  So, the runtime complexity of this method is O(1 + n + 1), which we can write as O(2 + n). However, when using the Big O notation, we drop the constant 2 because it doesn't really matter. Here's the reason: if our array has 1 million items, adding two extra operations doesn't have a significant impact on the cost of our algorithm. The cost still increases linearly, so we simplify by dropping the constant. See Figure 2.5.

 

  Figure 2.5: A for loop operation and two single print operations. Runtime complexity is still O(n).

  What matters is that the cost of this algorithm increases linearly and in proportion to the size of our input. If you have 5 items in the input, we're

going to have 5 operations. If we have a million items, we're going to have a million operations.

  Now, what if we had two loops in Figure 2.5, one after the other? What do you think is the runtime complexity of this method? It's going to be O(n + n), or O(2n). This is another example where we drop the constant 2, because all we need here is an approximation of the cost of this algorithm relative to its input size. Both n and 2n represent linear growth, so we have the same result again: O(n).

  Now, what if this method had two parameters, an array of numbers and an array of names? What do you think is the runtime complexity here? Well, both of these loops run in O(n), but here's the tricky part. What is n in this case? We're not dealing with one input. We have two inputs, which are numbers and names. So, we need to distinguish between these two inputs.

 

  Figure 2.6: An array of numbers with input size n and an array of names with input size m. Runtime complexity is O(n + m), which simplifies to O(n).

 

We could use n for the size of the first input and m for the size of the second input (Figure 2.6). So the runtime complexity of this method is going to be O(n + m). Once again, we can simplify this to O(n) because the runtime of this method still grows linearly.
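
  A minimal sketch of a method with two inputs, as just described (the names are illustrative):

  public void printNumbersAndNames(int[] numbers, String[] names) {
      for (int number : numbers)        // n iterations
          System.out.println(number);
      for (String name : names)         // m iterations
          System.out.println(name);
      // total work: O(n + m), which still grows linearly
  }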

  2.4. O(n²) – 07:13 min.

  In section 2.3, you learned that simple loops run in linear time, or O(n). But what if we have nested loops? What is the runtime complexity then? Well, let's find out! In the outer loop, we're iterating over our input array, so here we have O(n). Now, in each iteration, we're once again iterating over all the items in this array, which is another O(n). Therefore, the runtime complexity of this method is O(n × n), that is, O(n²). See Figure 2.7.

 

  Figure 2.7: Two nested loops, each over an input of size n. Runtime complexity is O(n²).
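
  A minimal sketch of the nested loops the figure shows (using for-each loops for brevity):

  public void printPairs(int[] numbers) {
      for (int first : numbers)             // n iterations
          for (int second : numbers)        // n iterations for each item of the outer loop
              System.out.println(first + ", " + second);   // n * n prints in total: O(n²)
  }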

  We say this algorithm runs in quadratic time. As you can see in Figure 2.8, which is a graph of runtime (vertical axis) against input size n (horizontal axis), algorithms that run in O(n²) get slower than algorithms that run in O(n).

 

  Figure 2.8: Comparison of linear and quadratic runtimes.

  Of course, this depends on the size of the input. If we're dealing with an array of, let's say, 50 items, we are not going to see any difference. But as our input grows larger and larger, algorithms that run in O(n²) get slower and slower.

  Now, what if we had another loop before or after the nested loops? What is the runtime complexity of this method? Well, the single loop runs in O(n), so the runtime complexity of this method is going to be O(n + n²).

  Now, once again, we can simplify this result. The square of n is always greater than n itself, right? So in this expression, n² always grows faster than n. Again, we use the Big O notation to understand how much the cost of an algorithm increases; all we need is an approximation, not an exact value. So, we can drop the n and conclude that this method runs in O(n²). See Figure 2.9.

 

  Figure 2.9: Three for loops (two nested), each over an input of size n. Runtime complexity is still O(n²).

  Let's look at another example. What if we remove the uppermost loop and instead add a third nested loop below the second one? The runtime complexity is now O(n³). See Figure 2.10.

 

  Figure 2.10: Three nested for loops, each over an input of size n. Runtime complexity is O(n³).

  As you can imagine, this algorithm gets far slower than an algorithm with O(n²) complexity.

  2.5. O(log n) – 09:37 min.

  Another growth rate we're going to talk about is logarithmic growth, which we write as O(log n). Figure 2.11 shows the logarithmic curve. Now compare it with the linear curve. As you can see, the linear curve keeps growing at the same rate, but the logarithmic curve flattens out at some point. So an algorithm that runs in logarithmic time is more efficient and more scalable than an algorithm that runs in linear or quadratic time.

 

  Figure 2.11: Comparison of linear and logarithmic runtimes.

  Let's see a real example of this. Let's say we have an array of sorted numbers from 1 to 10, and we want to find the number 10. One way to find the 10 is to iterate over this array using a for loop, going forward until we find it. This is called linear search, because it runs in linear time. See Figure 2.12.

 

  Figure 2.12: An illustration of a linear search.

  In the worst-case scenario, if the number we're looking for is at the end of our array, we have to inspect every cell in the array to find the target number. The more items we have, the longer this operation is going to take. So, the runtime of this algorithm increases linearly and in direct proportion to the size of our array.
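
  A minimal sketch of linear search (the method name is illustrative):

  public int linearSearch(int[] numbers, int target) {
      for (int i = 0; i < numbers.length; i++)
          if (numbers[i] == target)
              return i;     // found the target
      return -1;            // worst case: we inspected every cell, O(n)
  }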

  We have another searching algorithm called binary search. This algorithm runs in logarithmic time; it's much faster than linear search. Assuming that our array is sorted, we start off by looking at the middle item (5). Is this item smaller or greater than the value we're looking for? It's smaller. So, our target number, in this case 10, must be in the right partition of this array, right? So we don't need to inspect any of the items in the left partition, and with this, we can narrow down our search by half. See Figure 2.13.

 

  Figure 2.13: An illustration of a binary search.

  Now, in the right partition, again we look at the middle item. Is it smaller or greater than the target value? It's smaller. So again, we ignore the items on the left and focus on the items on the right. So, in every step, we're essentially narrowing down our search by half.

  With this algorithm, if we have 1 million items in our array, we can find the target item with a maximum of about 20 comparisons, since log₂(1,000,000) ≈ 19.9. We don't have to inspect every item in our array. This is logarithmic time in action!

  We have logarithmic growth in algorithms where we reduce our work by half in every step. You're going to see this a lot in the second part of this series where we talk about trees and graphs. Unfortunately, I cannot show you an example of this in code now because it's a bit too complex. There are a few things we have to talk about before you're ready to see that in code, but trust me, you'll see that in the code in the future and it'll become super easy.

  For now, all I want you to take away is that an algorithm with logarithmic time is more scalable than one with linear time.

  2.6. O(2ⁿ) – 12:16 min.

  The last growth rate we're going to talk about in this section is exponential growth, which is the opposite of logarithmic growth. As you can see in Figure 2.14, the logarithmic curve slows down as the input size grows, but the exponential curve grows faster and faster.

 

  Figure 2.14: Comparison of exponential and logarithmic runtimes.

  Obviously, an algorithm that runs in exponential time is not scalable at all. It'll become very slow very soon. Again, I cannot show you an example of this in code yet. We'll have to look at it in the future. For now, all you need to understand is that the exponential growth is the opposite of the logarithmic growth.

 

By the way, the growth rates we have talked about so far are not the only ones, but they are the ones you will see most of the time. There are some variations that we'll look at in the future. For now, just remember the five curves shown in Figure 2.15.

 

  Figure 2.15: Comparison of the 5 runtimes: Exponential, quadratic, linear, logarithmic and constant runtimes (from top to bottom).

  2.7. Space Complexity – 13:07 min.

  You have seen how we can use the Big O notation to describe the runtime complexity of our algorithms. In an ideal world, we want our algorithms to be super fast and scalable and to take a minimal amount of memory. But unfortunately, that rarely, if ever, happens. It's like asking for a Ferrari for $10. It just doesn't happen.

  Most of the time, we have to make a trade-off between saving time and saving space. There are times when we have plenty of space, so we can use it to make an algorithm faster and more scalable. But there are also times when we have limited space, like when we build an app for a small mobile device. In those situations, we have to optimize for space, because scalability is not a big factor; only one user is going to use our application at that moment, not a million users.

  So, we need a way to talk about how much space an algorithm requires, and that's where we use the Big O notation again. Let's look at a few examples.

  Suppose we have a method that takes an array of strings (String[]) and prints a “Hi” message for every name in this array. Now, in this method, we're declaring a loop variable, int i, whose memory use is independent of the size of the input. So, whether our input array has 10 items or 1 million items, this method only allocates some additional memory for this loop variable. It therefore takes O(1) space.

  Now, what if we declare another string array of the same length and initialize it as shown in line 6 of Figure 2.16?

 

  Figure 2.16: Code illustrating space complexity.

  The length of this new array is equal to the length of our input array. So, if our input array has a thousand items, this array will also have a thousand items. What is the space complexity of this method? It's O(n): the more items we have in our input array (line 4), the more space our method (line 6) is going to allocate, in direct proportion to the size of the input array. That's why we have O(n).
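
  A minimal sketch of the method just described (the method and variable names greet and copy are mine, since the figure isn't reproduced here):

  public void greet(String[] names) {
      String[] copy = new String[names.length];    // extra array of length n: O(n) space
      for (int i = 0; i < names.length; i++)       // the loop variable alone would be O(1) space
          System.out.println("Hi " + names[i]);
  }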

  By the way, when we talk about space complexity, we only look at the additional space that we allocate relative to the size of the input. We always have the input of size n, so we don't count it; we just analyze how much extra space we need to allocate for the algorithm. That's all there is to space complexity.

  In this course, we'll focus mainly on runtime complexity because that's a bit more tricky. But from now on, think about the space complexity of your algorithms, especially in situations where you have limited space, and see if there are ways to conserve memory.

  3. Arrays

  Video Part 3 > 3. Arrays (30:11 min)

  3.1. Introduction – 00:00 min.

  In this section, we're going to talk about our very first data structure and one that you're probably familiar with: Arrays. Arrays are built into most programming languages, and we use them to store a list of items sequentially. In this section, first, we're going to look at various strengths and weaknesses of arrays. Then I'm going to show you how to use arrays in Java. Finally, we're going to build an array class from scratch. This is a fantastic exercise for you to get your hands dirty in the code and get prepared for more complex data structures. So do not skip this section even if you know arrays well. Let's jump in and get started!

  3.2. Arrays – 00:45 min.

  Arrays are the simplest data structures. We use them to store a list of items: a list of strings, numbers, objects, literally anything. These items get stored sequentially in memory. For example, if you allocate an array of 5 integers, these integers get stored in memory as shown in Figure 3.1.

 

  Figure 3.1: An example of an array with memory addresses and runtime complexity of O(1).

  3.2.1. How an Array Stores Items

  Let's say the address of the first item in memory is 100. As you probably know, integers in Java take 4 bytes of memory. So, the second item would be stored at memory location 104, the third item at memory location 108, and so on. In general, the item at index i lives at address 100 + i × 4; for example, the item at index 3 is at 100 + 3 × 4 = 112. For this very reason, looking up items in an array by their index is super fast. We give our array an index, and it figures out exactly where in memory to look.

  Now, what do you think is the runtime complexity of this operation? It's O(1) because the calculation of the memory address is very simple. It doesn't involve any loops or complex logic. So, if you need to store a list of items and access them by their index, arrays are the optimal data structures for you.

  3.2.2. Limitations/Weaknesses of Arrays

  Now, let's look at the limitations or weaknesses of arrays. In Java and many other languages, arrays are static. This means when we allocate them, we should specify their size, and this size cannot change later on. So, we need to know ahead of time how many items we want to store in an array.

  Now what if we don't know? We have to make a guess. If our guess is too large, we'll waste memory because we'll have cells that are never filled. If our guess is too small, our array gets filled quickly. Then to add another item, we'll have to resize the array, which means we should allocate a larger array and then copy all the items in the old array into the new array.

  This operation can be costly. Can you guess its runtime complexity? Stop and think about it for a second. Here's the answer: let's say our array has 5 items and we want to add a 6th item. We have to allocate a new array and copy all 5 existing items into it. So the runtime complexity of this operation is O(n), which means the cost of copying the items into the new array increases linearly and in direct proportion to the size of the array.
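
  Here is a minimal sketch of that resize logic, assuming a hypothetical Array class with an items field and a count field:

  if (count == items.length) {
      int[] larger = new int[count * 2];   // allocate a larger array
      for (int i = 0; i < count; i++)      // copy all n existing items: O(n)
          larger[i] = items[i];
      items = larger;
  }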

  Now let's talk about removing an item. Here we have a couple of different scenarios. If we want to remove the last item, that's pretty easy: we can quickly look it up by its index and clear the memory. So here we have O(1), which is our best-case scenario. See Figure 3.2.

 

  Figure 3.2: Removing the last item of an array. Runtime complexity is O(1).

  But when doing Big O analysis, we should think about the worst-case scenario. What is the worst-case scenario here? It's when we want to remove an item from the beginning of the array: we have to shift all the items on the right one step to the left to fill in the hole. See Figure 3.3. The more items we have, the more this shifting operation is going to cost.

 

  Figure 3.3: Removing the first item of an array. Runtime complexity is O(n).
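
  A minimal sketch of that shifting logic, again assuming a hypothetical Array class with items and count fields:

  for (int i = index; i < count - 1; i++)   // shift everything after `index` one step left
      items[i] = items[i + 1];
  count--;                                  // worst case (index 0): n - 1 shifts, O(n)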

  So in the worst-case scenario, deletion is an O(n) operation. Because arrays have a fixed size, they don't perform well in situations where we don't know ahead of time how many items we want to store, or where we need to add or remove a lot of items. In those cases, we use linked lists, which we're going to talk about later in the course. Now, let's see a quick demo of arrays in Java.

  3.3. Working with Arrays – 03:53 min.

  In this section, we're going to look at arrays in Java. If you already know arrays well, feel free to skip this section. To declare an array, we start with the type of the array (see Figure 3.4). Let's say we want to declare an array of integers, so we type int[]. The square brackets indicate that this is an array and not just a regular integer. Next, we give our variable a name, like numbers. Then we use the new operator to allocate memory for this array.

 

  Figure 3.4: An example of an array declaration in Java.

  Then we repeat the type of the array one more time in line 5 (on the right side of the = sign), but this time, inside the square brackets, we specify the size of this array. Let's say 3.

  Now if we run this, printing numbers on the console with the command in line 6, we get a weird string. What is this? Well, it is a combination of the type of the array ([I for an array of integers), followed by an @ sign, and then a value generated based on the address of this object in memory. That is not useful; we want to see the contents of this array.

  To do that, we're going to use the Arrays class. So, instead of printing numbers directly in line 6, we type Arrays. This class is declared in the java.util package; when we press Enter, IntelliJ imports it at the top (line 3).

  Next, we use the dot operator to call the toString() method, passing our numbers array inside the parentheses. This method converts the array to a string, which we then print on the console. See Figure 3.5.

 

  Figure 3.5: Code to print the contents of an array in Java using the Arrays class.
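
  A minimal sketch consistent with Figure 3.5:

  import java.util.Arrays;

  public class Main {
      public static void main(String[] args) {
          int[] numbers = new int[3];
          System.out.println(Arrays.toString(numbers));   // prints [0, 0, 0]
      }
  }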

  Now if we run the program one more time, we get the result shown in Figure 3.6.

 

  Figure 3.6: Content of the array is displayed on the console

  The result [0, 0, 0] is much better. As you can see, all items in a numeric array are initialized to zero. Now let me show you how to change the values of these items. After we've declared our array in line 7, we want to set the first item of the array. We insert a new line in line 8 and type numbers[0]. Once again, we use square brackets, and inside them we specify an index; the index of the first item is 0. Let's set this item to, say, 10. Similarly, we can set the second item to 20 and the third item to 30. See Figure 3.7.

 

  Figure 3.7: How to set the first three items of the array to [10, 20, 30].
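
  In code, those three lines look like this:

  numbers[0] = 10;
  numbers[1] = 20;
  numbers[2] = 30;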

  Now, when we run the program one more time, we get the content of our array as [10, 20, 30]. Beautiful!

  Now, if you know the items that you're going to store in your array ahead of time, there is a shorter and cleaner way to initialize your array. We just delete all the ceremony in lines 8, 9, and 10. Then we modify line 7 by simply using curly braces to declare and initialize this array. Here it is:

  int[] numbers = {10, 20, 30};

  This will allocate an array of three items in memory and it'll initialize each item according to the values we have passed. We get the exact same result as before.

  Now, these array objects have a field called length, which holds the size of the array. So we add this in line 8:

  System.out.println(numbers.length);

  Let's print this on the console and see what we get. The size of this array is 3, as we expect, and we cannot change it. So, if we want to store 4 items here, we have to create another array, copy all the existing items, and then add the 4th item. This is the problem with arrays in Java. If you want to work with lists that grow or shrink automatically, you'll have to use linked lists. We're going to talk about them later in the course, but before we get there, I'm going to give you a fantastic exercise. We'll look at it in the next section.

  3.4. Exercise 1: Building an Array – 07:23 min.

  As you learned in the previous section, arrays in Java are static: they have a fixed size, and that size cannot be changed. But now we're going to do something really cool. We'll create a dynamic array. As we add new items to the array, it will automatically grow, and as we remove items, it will automatically shrink. Let me show you how that works.

 

  Figure 3.8: An example of a dynamic array.

  Now, this numbers object has a method called insert for adding new items. Let's add 10, 20, and then 30 (lines 6 to 8). Let's ignore or comment out line 9 for a moment. We also have a method for printing this array. Technically, this print() method in line 10 shouldn't be there, because an array should not be concerned with printing its contents on the console. It shouldn't know anything about the console; it should only be concerned with storing values. Displaying those values is a completely different concern and should not be implemented as part of this class.

  But in this course we want to keep things simple. That's why I've implemented the print method inside the array class. Now let's run this program and see what we get. The result is 10, 20, 30, as we expected (displayed vertically on the console). Beautiful!
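
  A minimal sketch of the usage shown in the figure, assuming the custom Array class you'll build in this exercise:

  Array numbers = new Array(3);   // initial capacity of 3
  numbers.insert(10);
  numbers.insert(20);
  numbers.insert(30);
  // numbers.insert(40);          // line 9, commented out for now
  numbers.print();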

  Although the initial size of our array is 3, we can easily add a new item, and our array will automatically grow. Let's now add back the line 9 we ignored a while ago (or uncomment it). When we run the program now, the result is 10, 20, 30, 40, as we expected (displayed vertically on the console).

  We also have a method for removing items. Let's say we want to remove the last item. What is the index of this item? Well, if the index of the first item is 0, then the index of the last item is 3. So we insert this in line 10:

  numbers.removeAt(3);

  When we run this program, the result is 10, 20, 30, as we expected.

  We also have a method for finding the index of an item. Let me show you. I'm going to add a print statement in line 11:

  System.out.println(numbers.indexOf(10));

  This will return the index of the first occurrence of an item, say 10. Because 10 is the first item in this array, this method will return 0. Try it yourself! If we pass a number that doesn't exist in the array, let's say 100, it's going to return -1.

  Now here's your exercise. I want you to build an array class that works exactly like what you saw in this video. This is a fantastic exercise for you to get your hands dirty in the code, especially for working with data structures and algorithms. Don't say, “This is too easy. I already know how to do this.”

  Trust me, as we talk about more complex data structures, our code is going to get more complex. So I want you to treat this as a warm-up exercise. Spend 20 minutes on this exercise, and when you're done, come back and see my solution.

  3.5. Solution 1: Building the Array – 10:15 min.

  We're going to solve this problem step by step, and this is the approach I want you to follow: whenever you want to solve a complex problem, don't try to do too many things at once. Break the problem down into smaller, easier-to-understand, easier-to-solve problems. So in this section, we just want to focus on creating this array class and printing its contents on the console.

  We're not going to worry about adding new items or removing existing items; we're going to implement those features in the following sections. So let's add a new public class. But first, you will need to create a new package if you have not done so already. To do this, in the left panel of the IntelliJ IDE, right-click on src and select New > Package. See Figure 3.9.

 

  Figure 3.9: How to create a new package.

  In the New Package dialog window, type the name you want to give your package and press Enter. Then we will create a class named Main, if you don't already have one. Figure 3.10 shows how to begin creating a new public class in your new package: just right-click on your package and select New > Java Class.

  Creating a new package when writing Java code in IntelliJ IDEA (or any other integrated development environment) serves several purposes, primarily related to code organization, encapsulation, and avoiding naming conflicts. Here's why creating packages is a common practice:

  1. Code organization. As your project grows, you'll likely have multiple classes and files. Organizing related classes into packages helps maintain a clear directory structure, making it easier to locate and manage your code. This is especially important in larger projects where the number of files can become substantial.

  2. Encapsulation. Packages provide a level of access control. Classes in the same package have package-level visibility to each other's package-private (default) members. This encourages encapsulation by allowing you to control which parts of your classes are accessible from outside the package.

  3. Namespace management. Packages help prevent naming conflicts. If you have two classes with the same name but in different packages, they won't clash. This is particularly useful when working on projects involving multiple developers or third-party libraries.

  4. Modularity. By organizing classes into packages, you can create modules that have well-defined responsibilities. This modularity improves code maintainability and allows you to focus on specific parts of the application without being overwhelmed by the entire codebase.

 

5. Access levels. Packages provide different levels of access control (public, protected, package-private, private) for classes, methods, and fields. This helps you define the visibility and accessibility of your code components.

  6. Classpath management. When you compile and run Java code, the classpath is used to locate the compiled classes. Organizing your code into packages ensures that the classpath is structured in a way that makes it easier for the Java runtime to find the required classes.

  Remember that while creating packages is a good practice, it's also important not to overcomplicate your project's structure. Find a balance between organizing your code effectively and avoiding unnecessary complexity.

 

  Figure 3.10: How to create a new Java class.

  Now, in the New Java Class dialog window, select Class and type Main in the name field. Main is your new public class. Figure 3.11.

 

  Figure 3.11: How to create a new public class.

  A Main public class will be created, as shown in Figure 3.12.

 

  Figure 3.12: A new public class called Main is created.

  In Java, classes are used to define blueprints for creating objects. Access modifiers like public and private control the visibility and accessibility of classes and class members (fields, methods, nested classes) within Java programs.

  3.5.1. Public Class

  A public class is accessible from any other class, whether it's within the same package or a different one. It can be used as a top-level class that's meant to be used by other classes and packages. If you declare a class as public, it can be instantiated, extended, and accessed by any code that has access to the package containing the class. Here’s an example:

  public class PublicClass {

  // class members and methods

  }

  3.5.2. Private Class

  In Java, you cannot declare a top-level class as private. Top-level classes must be either public or package-private (the default). However, you can declare a nested class as private. A private nested class is only accessible within the outer class that contains it; it cannot be accessed or used directly from outside that outer class. Here's an example:

  public class OuterClass {

  private class PrivateNestedClass {

  // class members and methods

   }

    // Other members and methods of OuterClass

  }

  To sum up:

  A public class can be accessed from anywhere in your codebase.

  A private class (which is typically a nested class) can only be accessed within the outer class that contains it.

 

Now, let's work with our Main class. Copy the code shown in Figure 3.13 from your “Book Resources” folder (Resources > Part 3 > 3. Arrays > Ojulawebcodes > 1.) and paste it into your Main class. Alternatively, you can just type the code in manually. When you click the Run button at the top, after a few seconds you'll see that a new static array with three items has been created, as shown by the output at the bottom of the window. See Figure 3.13.

 

  Figure 3.13: Code that creates a new array with three items.
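
  If you don't have the resources folder handy, a minimal sketch consistent with the figure's description is:

  public class Main {
      public static void main(String[] args) {
          int[] numbers = {10, 20, 30};   // a static array with three items
          for (int number : numbers)
              System.out.println(number);
      }
  }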

  To make our array dynamic and allow for adding new elements easily, we will use the ArrayList class from the java.util package. ArrayList is a resizable array implementation in Java that provides dynamic sizing and various utility methods for adding, removing, and manipulating elements. The code for a dynamic array is shown in Figure 3.14. Add it to your Main class.

 

  Figure 3.14: A dynamic array with three items.

  In this dynamic version of the code, we've used the ArrayList class instead of a traditional array. The ArrayList resizes itself dynamically as elements are added, making it easy to add new items. We use the add method to add elements to the end of the ArrayList. The size method in line 16 returns the current number of elements in the list, and the get method in line 17 retrieves an element by index.
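
  A minimal sketch consistent with that description (the line numbers mentioned above refer to the figure, not this sketch):

  import java.util.ArrayList;

  public class Main {
      public static void main(String[] args) {
          ArrayList<Integer> numbers = new ArrayList<>();
          numbers.add(10);
          numbers.add(20);
          numbers.add(30);
          for (int i = 0; i < numbers.size(); i++)
              System.out.println(numbers.get(i));
      }
  }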

  This approach lets you easily add new items without worrying about the fixed size of an array. So, when we run the code in Figure 3.14, we get the same result we got when we ran the code in Figure 3.13. However, if we add a new item to the ArrayList, for example by adding numbers.add(40); in line 13, we get this result: 10, 20, 30, 40, displayed vertically on the console. So, our array automatically grows.

  3.6. Solution 2: Insert Method – 13:34 min.

  Now let's implement the insert operation. We're not going to reuse the insert() method shown in the video, which inserts a new element at the end of the array. To insert an element at the end of an ArrayList, we can simply use the add() method without specifying an index, as we already did in Figure 3.14. Instead, we will use the add() method to insert an element at any index we want.

  Figure 3.15 shows how to use the add() method with an index to insert an element (25) at a specific position (the second position, index 1) within the ArrayList.

 

  Figure 3.15: Code showing how to use the add() method to insert the item 25 at the second position (index 1) within ArrayList.

  In Figure 3.15, we're using the add() method with two arguments: the index at which we want to insert the new element, and the value of the new element. The newIndex variable specifies the index (in this case, 1) where we want to insert the new value (25). The existing elements shift to make room for the new element. You can replace newIndex and newValue with whatever index and value you want to use for insertion. In this example, the result is 10, 25, 20, 30 (written vertically on the console).
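
  The key lines, sketched outside the figure:

  int newIndex = 1;                  // position at which to insert
  int newValue = 25;                 // value to insert
  numbers.add(newIndex, newValue);   // existing elements shift right to make room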

  3.7. Solution 3: Remove Method – 17:54 min.

Let’s see how we can use the remove() method with an index to delete an element at a specific position within the ArrayList. Figure 3.16 shows the code that demonstrates this.

In the example in Figure 3.16, we're using the remove() method with the index of the element we want to remove. The indexToRemove variable specifies the index (in this case, 1) of the element we want to remove. After removing the element at that index, the remaining elements will shift to fill the gap. You can replace indexToRemove with the specific index you want to use for removal.

 

  Figure 3.16: Code showing how to use the remove() method to remove the item 20 at the second position (index 1) within ArrayList.

  3.7.1. Validation of Index

The code in Figure 3.16 does not validate the index we specified. For example, if we specify indexToRemove = 4 in the code, we will get unexpected behavior or an error such as an IndexOutOfBoundsException. The valid range for indexToRemove in this code is 0 to 3, since there are only 4 items.

We can add a validation check to ensure that the indexToRemove variable is within the valid range of indices for the ArrayList. Figure 3.17 shows how we can modify the code to include this validation. We will also get a notification telling us whether the item at the index we specify was successfully removed or not.

 

  Figure 3.17: Code showing how to use the remove() method with validation within ArrayList. In the screenshot, 4 is an invalid index.

In this version of the code, before using the remove() method, we're checking if the indexToRemove is within the valid range of indices for the list. If the index is valid, the element will be removed, and a success message will be printed. If the index is invalid (such as the index 4 specified in Figure 3.17), a message indicating the invalid index will be printed, and no removal will take place. This validation helps prevent unexpected behavior or errors that could occur if an invalid index is used for removal.
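As a sketch of the validation logic in Figure 3.17 (the exact messages in the figure may be worded differently):

import java.util.ArrayList;

public class Main {

    public static void main(String[] args) {
        ArrayList<Integer> numbers = new ArrayList<>();
        numbers.add(10);
        numbers.add(20);
        numbers.add(30);
        numbers.add(40);

        int indexToRemove = 4; // invalid here: valid indices are 0 to 3

        // Validate the index before calling remove().
        if (indexToRemove >= 0 && indexToRemove < numbers.size()) {
            numbers.remove(indexToRemove);
            System.out.println("Element at index " + indexToRemove + " removed successfully.");
        } else {
            System.out.println("Invalid index: " + indexToRemove + ". No removal performed.");
        }
    }
}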

  3.8. Solution 4: Search (IndexOf) Method – 22:45 min.

In this section, we want to implement a search operation using the indexOf() method. An example is already shown in the video. However, in Figure 3.18, we’re using the indexOf() method to search for the element 30 in our ArrayList and return its index:

 

  Figure 3.18: Code showing how to use the indexOf() method to search for an item within ArrayList. In the screenshot, 30 is the element we’re searching for.

 

In this example, we're using the indexOf() method to search for the elementToSearch within the ArrayList. If the element is found, indexOf() will return the index of the first occurrence of the element. If the element is not found, it will return -1. We then use a conditional statement to print whether the element was found along with its index, or whether it was not found in the list.

For example, if we enter 30 for elementToSearch, “Element 30 found at index 2” will be printed on the console. But if we enter 50, “Element 50 not found in the list” will be printed on the console.
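Here is a sketch of the search logic shown in Figure 3.18 (the figure’s code may differ in details):

import java.util.ArrayList;

public class Main {

    public static void main(String[] args) {
        ArrayList<Integer> numbers = new ArrayList<>();
        numbers.add(10);
        numbers.add(20);
        numbers.add(30);

        int elementToSearch = 30;

        // indexOf() returns the index of the first occurrence, or -1 if absent.
        int index = numbers.indexOf(elementToSearch);

        if (index != -1) {
            System.out.println("Element " + elementToSearch + " found at index " + index);
        } else {
            System.out.println("Element " + elementToSearch + " not found in the list");
        }
    }
}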

We're done with this implementation. But I want to ask you one question: what is the runtime complexity of this method? Think about it for a few seconds. Okay, here's the answer. The runtime complexity of the indexOf() method in an ArrayList is O(n), where n is the number of elements in the list. This is because the indexOf() method needs to iterate through the list sequentially, comparing each item to the target item until it finds a match or reaches the end of the list. So, we have to loop over the entire array to find that item.

  If our array has 1 million items, that means we're going to have 1 million comparison operations. As I told you before, when doing Big O analysis, we always consider the worst case scenario. So, in the worst case scenario, the runtime complexity of this method is O(n).

  3.9. Dynamic Arrays – 25:13 min.

You learned how to build a dynamic array from scratch in the last exercise. That was a great exercise. However, Java has two implementations of dynamic arrays. Let me show you. We have two classes: Vector and ArrayList. Both these classes are declared in the java.util package, but they're slightly different.

The Vector class will grow by 100% of its size every time it gets full, whereas the ArrayList will only grow by 50% of its size. Also, all the methods in the Vector class are synchronized. This is an advanced topic, and I'm going to cover it in my upcoming advanced Java course. But basically, when we say a method is synchronized, that means only a single thread can access that method at a time.

In other words, if you have a multi-threaded application where multiple threads are going to work with this collection, you're not going to be able to use the ArrayList class. You should use the Vector class instead, because the methods of the ArrayList are not synchronized. Now let's have a quick tutorial on the Vector class.

  The Vector class in Java is part of the Java Collections Framework. It provides a dynamic array-like data structure that can grow or shrink in size as needed. It's similar to the ArrayList class, but with a few differences.

  3.9.1. Overview of the Vector Class

  1. Importing the Vector class:

  Before using the Vector class, you need to import it from the java.util package as follows:

  import java.util.Vector;

  2. Creating a Vector:

  You can create a Vector object using its default constructor or by specifying an initial capacity:

Vector<Integer> numbers = new Vector<>();

Vector<String> names = new Vector<>(10); // Specify an initial capacity of 10

  3. Adding Elements:

  You can add elements to a Vector using the add method:

  numbers.add(10);

  numbers.add(20);

  numbers.add(30);

  4. Accessing Elements:

  You can access elements of a Vector using index-based access just like an array:

  int firstNumber = numbers.get(0); // Access the first element

  5. Removing Elements:

  The Vector class provides several methods to remove elements:

remove(int index): Removes the element at the specified index.

remove(Object o): Removes the first occurrence of the specified element.

removeAllElements(): Removes all elements from the Vector.

  numbers.remove(1); // Removes the element at index 1

  6. Checking Size and Capacity:

  You can check the current size (number of elements) and capacity (maximum number of elements without resizing) of the Vector:

  int size = numbers.size();

 

int capacity = numbers.capacity();

  7. Iterating Through Elements:

You can use loops or iterators to iterate through the elements of a Vector:

for (int i = 0; i < numbers.size(); i++) {
    int element = numbers.get(i);
    System.out.println(element);
}

  8. Thread-Safe Operations:

  One key difference between Vector and ArrayList is that Vector methods are synchronized by default, making them thread-safe. This means that multiple threads can safely access and modify a Vector concurrently without causing data corruption. However, this synchronization can impact performance.

  9. Performance Considerations:

  Because of the built-in synchronization, Vector can be slower than ArrayList for single-threaded operations. If you don't require thread safety, consider using ArrayList instead.

  In most modern Java applications, ArrayList is preferred over Vector due to its better performance. However, if you require thread safety, you might consider using other data structures such as Collections.synchronizedList with an ArrayList for better performance control.
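Putting the snippets above together, here is a minimal, runnable sketch of working with a Vector (the class name VectorDemo is my own):

import java.util.Vector;

public class VectorDemo {

    public static void main(String[] args) {
        // Create a Vector with an initial capacity of 10.
        Vector<Integer> numbers = new Vector<>(10);

        // Add elements to the end.
        numbers.add(10);
        numbers.add(20);
        numbers.add(30);

        // Access an element by index.
        int firstNumber = numbers.get(0);
        System.out.println("First element: " + firstNumber);

        // Remove the element at index 1.
        numbers.remove(1);

        // Check the size and capacity.
        System.out.println("Size: " + numbers.size());         // 2
        System.out.println("Capacity: " + numbers.capacity()); // 10

        // Iterate through the remaining elements.
        for (int element : numbers) {
            System.out.println(element);
        }
    }
}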

  3.10. Wrap up – 29:02 min.

  Let's quickly recap the key points you learned about arrays.

Arrays are the simplest data structures and are built into most programming languages. In some languages, like JavaScript and Python, arrays are dynamic, which means they can grow or shrink automatically. However, in Java, arrays are static. Having said that, Java provides a dynamic array implementation in the ArrayList class. Each time an ArrayList gets full, it'll grow by 50% of its size. Arrays are great when you know ahead of time how many items you want to store in them.

  3.10.1. Runtime Complexities of Various Operations

  Looking up by index is an O(1) operation, so it's super fast. Looking up by value is an O(n) operation because we have to iterate over all the items to find a given value. In the worst case scenario, this item is going to be the last item in the array. Insertion and deletion are both O(n) operations because the items may have to be copied to a new array or shifted to the left in case of deletion.

  In the next section, we're going to talk about linked lists. Are you ready? Let’s dive in!

  4. Linked Lists

  Video Part 3 > 4. Linked Lists (1:05:10 min)

  4.1. Introduction – 00:00 min.

  Linked lists are probably the most commonly used data structures after arrays. They solve many of the problems with arrays and are also used in building more complex data structures. In this section, we'll talk about how linked lists are structured in memory. We'll look at the time complexity of various operations on them. Finally, we're going to build a linked list from scratch. This is an incredible exercise for you to train your programming brain. Are you ready? Let’s jump in and get started.

  4.2. What Are Linked Lists? – 00:37 min.

We use linked lists to store a list of objects in sequence, but unlike arrays, linked lists can grow and shrink automatically. As you can see in Figure 4.1, a linked list consists of a group of nodes in sequence. Each node holds two pieces of data: one is a value, and the other is the address of the next node in the list.

 

  Figure 4.1: An illustration of a linked list.

So, we say each node points to or references the next node. That's why we refer to these structures as linked lists. Because these nodes are linked together, we call the first node the head and the last node the tail.

Now, let's look at the time complexity of various operations. Let's say we want to look up a value, that is, find out if our list contains a given number. We have to traverse the list starting from the head all the way to the tail. What is the runtime complexity here? It's O(n), because the value that we're looking for may be stored in the last node. That is our worst case scenario, right?

 

What about looking up by index? Well, unlike arrays where items are stored sequentially, the nodes of a linked list can be all over the place in memory. They may not be exactly next to each other. That's why each node needs to keep a reference to the next node. For this reason, unlike arrays, we cannot quickly look up an item by its index. We have to traverse the list until we find that item. In the worst case scenario, that item can be at the end of the list. So once again, here we have O(n).

What about insertions? Well, it depends where we want to insert an item. If we want to insert a new item at the end, we simply need to create a new node and have the last node, or the tail, point to it. See Figure 4.2. We should have a reference to the last node somewhere so we don't have to traverse the list every time. Now, we need to have the tail reference this new node. So inserting a new item at the end is an O(1) operation.

 

  Figure 4.2: Inserting a new value at the end of a linked list.

What about inserting at the beginning? What do you think is the runtime complexity here? Think about it for a few seconds. Here's the answer. It's an O(1) operation because, again, we should have a reference to the head or the first node. So, to insert a new item at the beginning of the list, we create a new node, link it to the first node, and then change the head to point to this new node. See Figure 4.3. This is very fast. Unlike arrays, we don't have to copy or shift items around. We simply update the links or references.

 

  Figure 4.3: Inserting a new value at the beginning of a linked list.

Now, what if you want to insert an item somewhere in the middle, say after the 10th node? Well, first we have to find that node. That's an O(n) operation. Then we have to update the links, which is an O(1) operation. So, inserting an item in the middle is O(n + 1), which is the same as an O(n) operation.

Now let's talk about deletions. I want you to think about three scenarios: deleting an item from the beginning, from the end, and from the middle. Draw on a piece of paper how the links should be updated. Also, calculate the runtime complexity for each scenario. This is very important. Make sure to do this little exercise, because later on you're going to code all of this. If you don't understand these concepts, you're not going to be able to code them. So, spend a few minutes on the exercise. When you're done, come back and continue reading.

Here are the answers. Deleting the first item is super fast. We simply set the head to point to the second node. That's an O(1) operation. Now, we should also remove the link from the previous head so it doesn't reference the second node anymore. Why? Because if we don't do this, Java's garbage collector thinks this object is still used, so it won't remove it from memory. That's why we should unlink this object from the second object.

What about deleting the last item? This one is a bit tricky. We can easily get the tail, but we need to know the previous node so we can have the tail point to that node. How can we do that? We have to traverse the list from the head all the way to the tail. As soon as we get to the node before the last node, we keep a reference to it as the previous node. Then we'll unlink this node from the last node and finally have the tail point to the previous node. So, the runtime complexity here is O(n) because we have to traverse the list all the way to the end. Please watch the video for a clearer explanation.

What about deleting from the middle? Again, we have to traverse the list to find the node as well as its previous node. We should link the previous node to the node after this node and then remove this link, so the object gets removed from memory by Java's garbage collector. Again, here we have an O(n) operation. See Figure 4.4.

 

Figure 4.4: Deleting a value from the middle of a linked list.

  Next, we're going to work with linked lists in Java.

  4.3. Working with Linked Lists – 05:10 min.

  In Java, a linked list is a data structure that represents a linear collection of elements. Unlike arrays, which have a fixed size, linked lists can grow or shrink dynamically. As explained earlier, each element in a linked list is represented by a node, and each node contains the element itself and a reference (or link) to the next node in the list. This linked structure allows for efficient insertion and deletion of elements at any position in the list.

  It's important to note that linked lists are particularly useful when you need frequent insertions or deletions in your data structure, as they can efficiently handle these operations compared to arrays.

  Linked lists come in various flavors, each with its own characteristics and use cases. Here are some of the different types of linked lists:

Singly Linked List: This is the simplest form of a linked list. Each node has a data element and a reference to the next node in the sequence. It only allows traversal in one direction, from the head to the tail. This is the type of linked list that will be demonstrated in the next section.

Doubly Linked List: In a doubly linked list, each node contains two references: one to the next node and another to the previous node in the sequence. This bidirectional linkage allows for more efficient traversal in both directions. It also enables operations like insertion and deletion of nodes before or after a given node more easily. However, it consumes more memory due to the extra reference per node.

Circular Linked List: In a circular linked list, the last node's next pointer doesn't point to null but rather to the first node, creating a circular structure. This can be useful for applications that require continuous looping through the list or for creating circular buffers.

Singly Linked List with Tail Pointer: In addition to the head pointer, this type of singly linked list also maintains a direct reference to the tail node. This makes appending new elements to the end of the list more efficient, as you don't need to traverse the entire list to find the tail node.

Skip List: A skip list is a data structure that allows for fast search within an ordered sequence of elements. It's built using multiple layers of linked lists, with each successive layer containing a subset of the elements from the layer below. This structure enables logarithmic time complexity for searching, making it efficient for large datasets.

Self-Organizing List: This type of list reorganizes itself based on the frequency of access to its elements. The idea is to improve the efficiency of frequently accessed elements by moving them closer to the head of the list. Variants include the Move-to-Front list and the Transpose list.

Unrolled Linked List: In an unrolled linked list, multiple elements (or nodes) are stored in each node. This reduces memory overhead and can improve cache locality for certain operations. It's useful when the elements themselves are small.

Multi-Level Linked List: This is an extension of the singly linked list, where each node can point to multiple nodes in the layer below it. It's used in hierarchical data structures like trees and graphs.

We’ll look at singly and doubly linked lists more closely in section 4.10.

  The choice of which type of linked list to use depends on the specific requirements of the problem we're trying to solve. Each type comes with its own trade-offs in terms of memory usage, insertion and deletion efficiency, and traversal capabilities.

  Here's an explanation of how to use a singly linked list in Java:

  How to import the LinkedList class

  First, you need to import the LinkedList class from the java.util package as follows:

  import java.util.LinkedList;

  How to create a LinkedList

  You can create a new instance of a linked list called mylist for integers using this constructor:

LinkedList<Integer> mylist = new LinkedList<>();

  Similarly, you can create a new instance of a linked list called mylist for Strings using this constructor:

LinkedList<String> mylist = new LinkedList<>(); // a new linked list called mylist is created

  How to add elements

You can add elements such as integers and strings to the linked list variable we just created using various methods, such as add(), addFirst(), addLast(), etc. Below are some examples:

  1. Using add() with integers:

  mylist.add(10);

  mylist.add(20);

  mylist.add(30);

  2. Using addLast() with integers:

  mylist.addLast(10); mylist.addLast(20); mylist.addLast(30);

  3. Using addLast() and addFirst() with integers:

mylist.addLast(10); mylist.addLast(20); mylist.addFirst(30); // 30 will come first in the list.

  4. Using add() with strings:

mylist.add("Apple");

mylist.add("Banana");

mylist.add("Cherry");

  How to access elements

  You can access elements by their index using the get() method:

String fruit = mylist.get(1); // Retrieves the element at index 1.

  How to remove elements:

  You can remove elements using methods like remove(), removeFirst(), removeLast(), etc. Here's an example:

mylist.remove(1); // Removes the element at index 1.

  How to iterate through the LinkedList

  You can iterate through the linked list using a loop or an iterator:

for (String item : mylist) {
    System.out.println(item);
}

  Other useful methods

  LinkedList provides various methods for manipulating a list, such as size(), isEmpty(), clear(), etc. Here's an example:

int size = mylist.size(); // Returns the number of elements in the list.

boolean isEmpty = mylist.isEmpty(); // Returns true if the list is empty.

mylist.clear(); // Removes all elements from the list.

Figure 4.5 shows a complete example of the usage of some of the methods explained above. Apple (with index 0) is removed, after which Banana (with index 1, stored in the variable fruit) is appended to the bottom of the list.

 

  Figure 4.5: A linked list is created with elements Apple, Banana and Cherry. Apple is removed from the top and Banana is added again at the end of the list.

  To check the index of the first occurrence of any element of a linked list, you can use indexOf as shown in the example below:

System.out.println(mylist.indexOf("Banana")); // Result = 0. Run your own code to confirm this.

To see if the list contains a particular element, such as Banana in the example in Figure 4.5, we use contains as shown in the example below:

System.out.println(mylist.contains("Banana")); // Result = true. Run your own code to confirm this.

  What do you think the following lines do?

var array = mylist.toArray();

  Watch the video of this section to find out!

  4.4. Exercise 2: Building a Linked List – 08:34 min.

Just like in section 3.4, we're going to build a linked list from scratch. This is a great exercise for you to practice all the materials in this course. But before we get started, I want to give you a couple of hints. Create a new project in a new window by following this path:

  File > New > Project… .

  You don’t need to create a new package this time around because you really don’t need it. See how to create a new project in Figure 4.6.

 

  Figure 4.6: How to create a new project in IntelliJ IDE.

In the dialog box that pops up, give the project any name you like. I call mine “Exercise 2 - Building a Linked List”. Then hit the Create button at the bottom. See how to do this in Figure 4.7.

 

  Figure 4.7: How to name a new project.

  Next, another dialog box pops up asking us to choose whether we want to open our project in the existing window or in a new window. See Figure 4.8.

 

 

Figure 4.8: Choosing whether to open the new project in the existing window or in a new window.

If we choose a new window, the Main class is automatically created and some sample code is placed in the Main tab for us. We will delete the sample code and start writing our own. I want you to spend 30 to 40 minutes on this exercise before you look at my solution below. Do not skip this. It’s super important!

  4.5. Solution: Building a Linked List – 09:59 min.

In the video, the following methods were used: addLast(), addFirst(), indexOf(), contains(), and removeFirst(), among others. But here, I use the append method so that you can learn several approaches for solving this exercise. Figures 4.9A to 4.9C show my solution to this exercise.

 

  Figure 4.9A: Java code for building a linked list – Part A.

 

  Figure 4.9B: Java code for building a linked list – Part B.

 

 

Figure 4.9C: Java code for building a linked list – Part C.

  In this code, I created a basic singly linked list implementation with the ability to add (append) elements to the end of the list.

  Explanation of the code

Node class (code line 1 in Figure 4.9A): This class represents a single node in the linked list. Each node contains two properties: data to store the value, and next to store a reference to the next node in the list.

  LinkedList class (code line 11 in Figure 4.9A): This class represents the linked list itself. It has a head property that points to the first node in the list.

  The append method (code line 18 in Figure 4.9B) in the LinkedList class adds a new node with the given data to the end of the list. It first creates a new node, and if the list is empty (head is null), it sets the new node as the head. Otherwise, it traverses the list from the head to the last node and adds the new node there.

The display method in the LinkedList class (code line 31 in Figure 4.9B) is used to print the elements of the list in order, starting from the head and moving through the next pointers.

  In the main method of the Main class, a new linked list is created. Elements 10, 20, and 30 are appended to the list using the append method. Finally, the contents of the linked list are displayed using the display method.
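Since Figures 4.9A to 4.9C are screenshots, here is a sketch of the same structure; the exact code and line numbers in the figures may differ slightly:

// A single node: a value plus a reference to the next node.
class Node {
    int data;
    Node next;

    Node(int data) {
        this.data = data;
    }
}

// A minimal singly linked list with append and display operations.
class LinkedList {
    private Node head;

    // Add a new node with the given data to the end of the list.
    public void append(int data) {
        Node newNode = new Node(data);
        if (head == null) {      // empty list: the new node becomes the head
            head = newNode;
            return;
        }
        Node current = head;
        while (current.next != null) // walk to the last node
            current = current.next;
        current.next = newNode;      // link the last node to the new node
    }

    // Print the elements in order, from the head through the next pointers.
    public void display() {
        Node current = head;
        while (current != null) {
            System.out.print(current.data + " ");
            current = current.next;
        }
        System.out.println();
    }
}

public class Main {
    public static void main(String[] args) {
        LinkedList list = new LinkedList();
        list.append(10);
        list.append(20);
        list.append(30);
        list.display(); // prints: 10 20 30
    }
}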

 

When you run this code, you will see the result 10 20 30 displayed at the bottom of the IDE window. This code demonstrates the basic structure and operations of a singly linked list in Java.

  4.6. Implementing Size – 30:27 min.

  Our linked list is getting better and better. Now, we’re going to take it to the next level. Let's add a new method for getting the size of a linked list. How are we going to calculate this? Well, one approach is to traverse the list. We start from the beginning, go all the way to the end. Every time we get a node, we increment a counter variable and then we'll return it.

The problem with this approach is that every time we call the size() method, we have to recalculate the size. This is not efficient. What if our linked list has a million items? We don't want to traverse such a big list every time we call the size method. Here’s a better way.

Here is a more efficient way to calculate the size of a linked list. One common approach is to maintain the size as an attribute of the linked list and update it whenever we perform an operation that changes the list's size (insertions or deletions). This way, we won't need to traverse the entire list every time we want to know its size. Figures 4.10A to 4.10C show how I implemented this approach:

 

  Figure 4.10A: Code to calculate the size of a linked list – Part A.

 

 

Figure 4.10B: Code to calculate the size of a linked list – Part B.

 

  Figure 4.10C: Code to calculate the size of a linked list – Part C.

  With this approach, every time we insert a new element into the linked list, we increment the size variable. Similarly, if we implement deletion methods, we decrement the size variable accordingly. This way, we can get the size of the linked list in constant time without having to traverse the entire list each time.

  Remember that this approach requires that we carefully manage the size attribute to keep it in sync with the actual number of elements in the linked list.
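As a sketch of the idea in Figures 4.10A to 4.10C, here are the relevant additions to the LinkedList class from the previous sketch (the figures' code may differ in details):

class LinkedList {
    private Node head;
    private int size; // maintained incrementally, never recalculated

    public void append(int data) {
        Node newNode = new Node(data);
        if (head == null) {
            head = newNode;
        } else {
            Node current = head;
            while (current.next != null)
                current = current.next;
            current.next = newNode;
        }
        size++; // keep the counter in sync on every insertion
    }

    // O(1): simply return the stored counter.
    // A deletion method would decrement size in the same way.
    public int size() {
        return size;
    }
}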

  4.7. Converting Linked Lists to Arrays – 34:42 min.

Let's say we have a method that expects a regular Java array. We cannot pass a linked list to that method. Converting a linked list to a regular Java array involves traversing the linked list and adding its elements to an array. One way to do it is shown in the video, but let’s look at another simple example to demonstrate how to do this. Assume we have the Node and LinkedList classes shown in Figure 4.11A, similar to the ones we discussed earlier.

 

  Figure 4.11A: Code to convert a linked list to an array – Part A.

  Now, let's add a method to the LinkedList class to convert it into a regular Java array:

 

  Figure 4.11B: Code to convert a linked list to an array – Part B.

 

In this example, the toArray() method creates a new array of the same size as the linked list. It then iterates through the linked list, copying each element to the corresponding index in the array. Finally, it returns the array.
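Here is a sketch of such a toArray() method, added to the LinkedList class from the earlier sketches (the figures' code may differ in details):

// Convert the linked list to a regular Java array by
// traversing the nodes once (an O(n) operation).
public int[] toArray() {
    int[] array = new int[size]; // uses the size field kept in sync
    Node current = head;
    int index = 0;
    while (current != null) {
        array[index++] = current.data;
        current = current.next;
    }
    return array;
}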

  4.8. Cheat Sheets – 36:53 min.

In case you didn't know, there are several cheat sheets on the internet where you can see the runtime complexity of various operations on different data structures. But I personally am not a fan of these cheat sheets, and I discourage you from using them, especially if you're preparing for a job interview. That’s because when it comes to data structures, memorization doesn't work. You really need to understand the concepts. You need to understand how these data structures work, how they store values, how you can perform various operations on them, and what the runtime complexity of these operations is. That's why you have been given these exercises and asked questions in the middle of the videos and this book: to get you involved and make you think.

So, if you've been watching the accompanying videos of this course like an entertaining movie, sorry, you have to stop and go back. You should do the exercises before going further, because as we go through the course, you're going to learn about other types of data structures. And if you don't understand exactly how they work, you're not going to be able to answer interview questions; there's simply too much to memorize, and memorization doesn't work.

  So, assuming that you have been actively participating, let's look at the differences between arrays and linked lists next.

  4.9. Arrays vs Linked Lists – 38:05 min.

  1. Differences in Terms of Space

  One of the common interview questions is the differences between arrays and linked lists. You should be able to explain the differences in terms of required memory and time complexity of various operations.

  (i) Static arrays have a fixed size

Arrays, or more accurately static arrays, have a fixed size, so if you don't know ahead of time how many items you want to store in them, you should use either dynamic arrays or linked lists.

  (ii) Dynamic arrays grow by 50 or 100%

  Dynamic arrays grow by 50 or 100% of their size when they get full, and they may end up wasting memory.

(iii) Linked lists don’t waste memory

Linked lists, on the other hand, take only as much memory as they really need, nothing more. Having said that, linked lists take a bit of extra memory because each node should store the address of the next node in addition to a value.

  (iv) Use arrays if you know the number of items to store

If you know ahead of time roughly how many items you're going to store, you had better use an array, either static or dynamic, because arrays have a smaller footprint. For example, let's say you're going to store up to 100 items, but you're not sure exactly how many. In that case, you may use a dynamic array with an initial capacity of 100. In Java, we can set the initial capacity (such as new ArrayList<>(100)) when creating an array list object.

  2. Differences in Terms of Performance

The above are the differences in terms of space, but space is not everything. We should also optimize for performance. Talking about performance, as you know, certain operations are fast in arrays but slow in linked lists. So, as I always emphasize, you need to understand what problem you're trying to solve. Every problem has different solutions, and each solution has certain strengths and weaknesses. There is no such thing as the perfect solution. You should always make trade-offs, and that's exactly what a good interviewer looks for. They want to see if you really understand a problem, can think of different solutions, explain the pros and cons of each, and pick the one that works best for a given scenario.

  3. Differences in Terms of Runtime Complexity

Now let's compare these data structures in terms of their runtime complexity. Arrays shine when you need to look up an item by an index; that's an O(1) operation. In linked lists, we have to traverse the list until we find the item at a given index, which is O(n). Looking up an item by its value is an O(n) operation in both these data structures because we have to iterate over all the items until we find the target.

 

Inserting items in an array is an O(n) operation because if the array gets full, we have to resize it and copy all the existing items to the new array. This is not a big deal in an array of 100 items, but if you're dealing with an array of 1 million items, it's certainly going to be a big issue. Again, there is no such thing as one size fits all. You need to understand the problem you're trying to solve.

  Inserting an item in a linked list can be either an O(1) or O(n), depending on where we want to add this new item. Adding at the beginning or at the end is super fast because we simply have to change the links. But adding at a given index involves a lookup by index, which is an O(n) operation.

Finally, deleting items in an array is an O(n) operation because we have to shift items to the left to fill the hole. With linked lists, if we want to delete an item from the beginning, we simply change the links; but if we want to remove it from the middle or the end, we have to traverse the list to find the previous node. Figure 4.12 shows a summary of these data structures in terms of their runtime complexity.

 

  Figure 4.12: Summary of the differences in the runtime complexity of arrays and linked lists.

  In the next section, I'll show you how we can optimize this using a different type of linked lists.

  4.10. Types of Linked Lists – 41:26 min.

We have various types of linked lists (see section 4.3), but the two most commonly used types are singly and doubly linked lists. Singly linked lists are what you’ve seen so far: every node has a reference or a pointer to the next node. In doubly linked lists, each node also has a reference to its previous node. See Figure 4.13.

 

  Figure 4.13: Comparison of singly linked list and doubly linked list.

  Now what is the benefit of this? Well, you learned that removing an item from the end of a linked list is an O(n) operation because we have to traverse the list to find the node before the last node. Doubly linked lists solve this problem. Since each node has a reference to its previous node, we can simply get the last node and from there we can navigate to the previous node. So, this will be an O(1) operation. It's awesome.

However, every feature comes with a cost. What is the cost of a doubly linked list? Well, each node should have an extra field to hold the address of the previous node, right? So doubly linked lists take more space than singly linked lists. But this can be negligible compared to the performance gain we get when removing an item from the end. That's why the LinkedList class in Java is an implementation of a doubly linked list.

Now, both singly and doubly linked lists can be circular, which means the last node will reference the first node, so we get a circle. See Figure 4.14. What is the benefit of this? We can use these lists when we need a circle. For example, imagine you want to build a music player. You give it a playlist and it plays each song in the list. After it reaches the last song, it'll start over. That's where we have a circle.

 

  Figure 4.14: A circular linked list.

  Let me give you another example. Back in the days when I was studying for my bachelor of software engineering, I built a graphical user interface framework with C++. So, I designed a text-based language where you could describe what the user interface of your application looks like. My framework

would then read the text file and construct form objects. On top of these forms, we could have text boxes, dropdown lists, check boxes, and so on. Now, as we pressed tab, the focus would move from one input box to another. I put those input boxes in a circular linked list. So after we pressed tab on the last input box, the focus would move to the first input box on the form.

Now let me tell you something interesting. I did this for my C project, which I took in the second semester. For this project I used C++ instead of C, which was not taught at our university. I used data structures, which were taught in year three, and I wrote some parts of my framework in assembly language, which was taught in the second year. Guess what grade I got out of 20? It was 16! Interestingly, there was a girl in the class who got 19, and I had actually helped her do her Pascal project, which was the basics of programming in semester one. Basically, I did the project for her.

  Now, here's the interesting part. The girl who couldn't do her Pascal project herself got 19 in C programming and I got 16. That's how universities are. That's how they waste your time and money. Your transcript or degree is never a reflection of your knowledge and capabilities. I know we got distracted from the course materials, but I thought it would give us a short break in the course. Thank you for listening to my story. Now let's move on to the next section.

  4.11. Exercise 3: Reversing a Linked List – 44:41 min.

One of the popular interview questions is reversing a linked list in place. Let me show you what I mean. If we have a linked list like this: [10 → 20 → 30], then to reverse it, we should change the direction of the links, like this: [10 ← 20 ← 30]. So now 30 will become the head or the first node, and 10 will become the tail or the last node.

Now I have to tell you, this is a tricky question. You're not going to be able to answer it within five or ten minutes. If it's your first time working with this kind of problem, that's perfectly fine. It could take you an hour, and that's okay: you're taking this course to learn and grow, so you're not expected to solve this problem on the spot. Otherwise, you wouldn't be taking this course, right?

  So, sit down and think about this for as long as you can and come up with a solution, even if it takes you one day to solve this problem, because the moment you solve this problem, you have gone to the next step. If you simply watch or read my solution, you're not going to grow. You're not going to train your programming brain. So go ahead, solve this problem. When you're done, move to the next section to see my solution.

  4.12. Solution: Reversing a Linked List – 46:14 min.

After you’ve watched how to reverse a linked list in the video, here’s another approach we can use to solve the problem. Imagine our list looks like this: [10 → 20 → 30]. To reverse it in place, we start from the beginning, grab the first two nodes, and change the direction of the link between them.

Figures 4.15A to 4.15D show the complete code I provide to reverse the linked list in place. In this approach, we modify the pointers of the existing nodes to reverse the order of the list. The figures show the combined code with the Node, LinkedList, and Main classes, including the reverse() method to reverse the linked list in place:

 

  Figures 4.15A: Code to reverse a linked list – Part A.

 

  Figures 4.15B: Code to reverse a linked list – Part B.

 

  Figures 4.15C: Code to reverse a linked list – Part C.

 

  Figure 4.15D: Code to reverse a linked list – Part D.

The code creates a linked list, inserts the three elements 10, 20, and 30, reverses the linked list using the reverse() method, and then prints both the original and reversed linked lists.

  The reverse() method modifies the pointers of the nodes to reverse the order of the list in place. After reversing, the head of the linked list is updated to the last node, effectively reversing the entire list.

  Node Class

The Node class represents a single node in the linked list. Each node contains an integer value and a reference to the next node in the list.

  LinkedList Class

  The LinkedList class manages the linked list operations. It contains methods to insert elements into the list, reverse the list in place, and print the elements.

  Main Class

  The Main class contains the main method, where you create an instance of the LinkedList class, insert elements, reverse the linked list, and print both the original and reversed lists.

  This code demonstrates how to create a linked list, insert elements, reverse the linked list in place, and print the original and reversed lists. The reverse()

method changes the pointers of the nodes to reverse the order of the list efficiently without requiring additional space.
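Here is a sketch of the classic iterative reversal the figures implement, written as a method on the LinkedList class from the earlier sketches (the figures' code may differ in details):

// Reverse the list in place by walking it once and flipping each
// node's next pointer (O(n) time, O(1) extra space).
public void reverse() {
    Node previous = null;
    Node current = head;
    while (current != null) {
        Node next = current.next; // remember the rest of the list
        current.next = previous;  // flip the link to point backward
        previous = current;       // advance previous
        current = next;           // advance current
    }
    head = previous; // the old tail is the new head
}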

  4.13. Exercise 4: Kth Node from the End – 55:15 min.

Let's talk about another popular interview question: find the Kth node from the end of a linked list in one pass. For example, if you have a linked list with these five nodes: [10 → 20 → 30 → 40 → 50], and K = 1, we should return 50. If K = 2, we should return 40, and if K = 3, we should return 30.

  Now, whenever you get a question like this, always try to simplify it. So instead of trying to find the Kth node from the end, try to find, let's say, the third node from the end. This is much easier. Once you solve this specific problem, then you can generalize your algorithm.

Now, here's the tricky part of this question. We have to find the target node in one pass. In other words, we cannot iterate over all these nodes (10 → 20 → 30 → 40 → 50), go to the end, count the number of nodes we have, and then start over and go forward until we find the target node. We have to do this in one pass. But I have a trick for you.

  A lot of linked list problems can be solved using two pointers. So, to find the Kth node from the end of a linked list in one pass, we can use the "two-pointer" technique. This involves using two pointers that maintain a constant distance between them as they traverse the list. An explanation of how to do this was shown in the video, but here's another step-by-step explanation of how to do it:

 

Given a linked list: [10 → 20 → 30 → 40 → 50] and let's say we want to find the 2nd node from the end (K = 2).

  1. Initialize two pointers, let's call them "first" and "second", and both point to the head of the linked list.

  2. Move the "first" pointer K steps ahead, where K is the position of the node you want to find from the end. In this case, move the "first" pointer 2 steps ahead.

  3. Now, start moving both the "first" and "second" pointers one step at a time until the "first" pointer reaches the end of the list (i.e., becomes NULL). While doing this, the "second" pointer will maintain a K-distance from the "first" pointer.

  4. When the "first" pointer reaches the end, the "second" pointer will be pointing to the Kth node from the end.

  Let's walk through this process step by step:

  1. Initially: [first] → 10, [second] → 10

  2. After moving "first" 2 steps ahead: [first] → 30, [second] → 10

  3. Move both pointers until "first" reaches the end:

  [first] → 50, [second] → 30

 

[first] → NULL, [second] → 40

At this point, the "second" pointer is pointing to the 2nd node from the end, which has the value 40. Spend about 20 minutes writing code to implement this, and then move on to the next section to see my solution.

  4.14. Solution: Kth Node from the End – 58:35 min.

  Figures 4.16A to 4.16C show a Java code snippet that implements this algorithm.

 

  Figure 4.16A: Code to find the Kth node from the end of a linked list in one pass – Part A.

 

  Figure 4.16B: Code to find the Kth node from the end of a linked list in one pass – Part B.

 

  Figure 4.16C: Code to find the Kth node from the end of a linked list in one pass – Part C.

 

Try this code. It will output "Kth node from the end: 40". So, this is how you can find the Kth node from the end of a linked list in one pass.
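As a sketch of the two-pointer technique, here is one way to write the method on the LinkedList class from the earlier sketches (the method name and the exact code in the figures may differ):

// Find the Kth node's value from the end in one pass using two pointers.
public Integer getKthFromTheEnd(int k) {
    if (head == null || k < 1)
        return null; // empty list or invalid k

    Node first = head;
    Node second = head;

    // Move "first" k steps ahead of "second".
    for (int i = 0; i < k; i++) {
        if (first == null)
            return null; // k is larger than the size of the list
        first = first.next;
    }

    // Advance both pointers until "first" falls off the end.
    while (first != null) {
        first = first.next;
        second = second.next;
    }

    return second.data; // the Kth node from the end
}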

  4.15. Wrap up – 1:03:57 min.

  Here’s a quick recap of all the key points you learned about linked lists.

Second most used data structures

Grow and shrink automatically

Take a bit more memory

Linked lists are the second most commonly used data structures after arrays. Unlike arrays, they automatically grow and shrink without wasting memory, but they require a little bit more memory because each node should have a reference to the next and/or previous nodes.

Figure 4.17 is the summary of the runtime complexities of various operations on linked lists.

 

Figure 4.17: Summary of the runtime complexities of various operations on linked lists.

  Looking up by index or value are both O(n) operations because they involve traversing the list. Inserting an item at the beginning or at the end is an O(1) operation because we simply have to change the links. But inserting it in the middle involves a lookup, which is an O(n) operation.

  Deleting an item from the beginning is an O(1) operation. Deleting it from the middle is an O(n) operation because it involves a lookup. But deleting it from the end is an O(n) in singly linked lists and O(1) in doubly linked lists.

  This brings us to the end of this section. I hope you've been enjoying the course so far. I look forward to seeing you in the next section.

  5. Stacks

  5.1. Introduction – 00:00 min.

  In this section, we're going to talk about stacks. We're going to look at the structure of stacks, and then talk about the runtime complexity of various operations. Next, we're going to see how we can use stacks to solve real world problems. Finally, we're going to build a stack from scratch. I'm super excited about this section and I hope you are too. So, let's jump in and get started.

  5.2. What are Stacks? – 00:32 min.

  Stacks are powerful data structures that can help us solve many complex programming problems. For example, we can use them to implement the undo feature in our applications. We can also use them to build compilers. Compilers use stacks for parsing the syntax of expressions. We can also use stacks to evaluate arithmetic expressions.

  Stacks have various practical uses across computer science and programming. Here are some common use cases for stacks:

1. Function Call Management: Stacks are often used in programming languages to manage function calls and recursion. When a function is called, its local variables and context are pushed onto the stack. When the function returns, this context is popped, allowing the program to resume from where it left off.

2. Expression Evaluation: Stacks are used for evaluating expressions, such as arithmetic expressions or expressions involving parentheses. They help in converting infix expressions to postfix or prefix notation and then evaluating them efficiently.

3. Undo/Redo Functionality: Stacks can be used to implement undo and redo functionality in applications. The current state or action is pushed onto the stack, and undoing involves popping the latest action from the stack.

 

4. Parsing and Syntax Checking: Stacks play a crucial role in parsing and syntax checking for programming languages. They can be used to ensure that opening and closing brackets, parentheses, and other symbols are balanced and correctly nested.

5. Backtracking: Stacks are useful in backtracking algorithms such as depth-first search (DFS) for traversing graphs and trees. The stack maintains the path taken so far, enabling the algorithm to backtrack when needed.

6. Memory Management: Stacks are used in memory management, especially in low-level programming languages. The call stack keeps track of memory allocated for functions and their variables, allowing for efficient memory deallocation when functions return.

7. Expression Conversion: Converting between different forms of expressions, such as infix to postfix or prefix notation, often involves using stacks to manipulate and reorder operators and operands.

8. Browser History Navigation: Stacks can be used to implement the back and forward navigation in web browsers. Each visited page is pushed onto the stack, allowing users to navigate backward and forward through their browsing history.

9. Undo Buffers in Text Editors: Stacks can be used to implement undo buffers in text editors. Each change made to the text is pushed onto the stack, allowing users to undo changes step by step.

 

10. Postfix Expression Evaluation: Stacks are commonly used to evaluate postfix expressions efficiently. Numbers are pushed onto the stack, and when an operator is encountered, the required number of operands are popped, the operation is performed, and the result is pushed back.

11. Resource Management: Stacks can be used to manage limited resources, such as in resource allocation algorithms. The stack keeps track of available resources, and when a resource is allocated, it is popped from the stack.

  These are just a few examples of the many ways stacks are used in computer science and programming. Their simple yet powerful nature makes them an essential tool for solving a wide range of problems.

  5.2.1. Structure of Stacks

Now let's look at the structure of stacks. The best way to understand stacks is to think of a stack of books. We can stack a bunch of books on top of each other, but we can only view or remove the top book. So, stacks are fundamental data structures that follow the Last-In-First-Out (LIFO) principle, meaning that the last element added to the stack is the first one to be removed.

  Imagine each object in a stack represents an action that a user performed in a text editor. So we have action one, action two, and action three. Now to undo these actions, we start with the last action. So, the last action that was performed is the first one that can be undone. We can take the last object out of our stack and call one of its methods to undo that action. Now, internally we use an array or a linked list to store the objects in a stack. So, a stack is basically a wrapper around an array or a linked list that gives us a different way of storing and accessing objects.

  5.2.2. Operations that Stacks Support

  Now, let's look at various operations that stacks support and their runtime complexities. Stacks have four operations:

  1. push(item) – which adds an item on top of the stack,

  2. pop() – which removes the item on the top,

  3. peek() – which returns the item on the top without removing it from the stack, and

  4. isEmpty() – which tells us if the stack is empty or not.

  As you can see, we don't have lookups because stacks are not really meant for that. So, we don't use stacks to store a list of products, customers and so on.

Now, in terms of runtime complexity, peek() and isEmpty() run in constant time, or O(1). push() and pop() also run in O(1) because we can quickly insert or remove an item at the end of a linked list. The same is true if we use an array to implement a stack; you'll see that later in the section. So, all operations of stacks run in O(1). That's all about stacks. Now let's jump in and see the Stack class in Java.

  5.3. Working with Stacks – 03:19 min.

In this section, we're going to look at the Stack class in Java. This class is generic, so we can store any type of object in this stack: integers, characters, strings, custom objects, and so on. Also, this class is declared in the java.util package, just like the other data structures you have seen so far. In the video, a stack of integers was created and the above-mentioned standard operations were demonstrated.

Figures 5.1A and 5.1B show another simple Java code example that demonstrates standard operations performed on a stack that stores integers. For this example, a new class called StackDemo.java is created first, before writing the code.

 

 

Figure 5.1A: Code that demonstrates standard operations performed on a stack that stores integers – Part A.

 

  Figure 5.1B: Code that demonstrates standard operations performed on a stack that stores integers – Part B.

  When you run this code, it will output:

  Top element: 30

  Popped element 1: 30

  Popped element 2: 20

  Is stack empty? false

  Remaining elements in the stack:

  50

  40

  10

The code in Figures 5.1A and 5.1B demonstrates the basic operations we can perform on a stack in Java, including pushing elements onto the stack, peeking at the top element, popping elements from the stack, checking if the stack is empty, and iterating through the stack's elements.
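If you don't have the figures handy, here is one possible StackDemo class that reproduces the output above (the figures' code may differ in details):

import java.util.Stack;

public class StackDemo {

    public static void main(String[] args) {
        Stack<Integer> stack = new Stack<>();

        // Push three elements onto the stack.
        stack.push(10);
        stack.push(20);
        stack.push(30);

        // peek() returns the top element without removing it.
        System.out.println("Top element: " + stack.peek());

        // pop() removes and returns the top element.
        System.out.println("Popped element 1: " + stack.pop());
        System.out.println("Popped element 2: " + stack.pop());

        // isEmpty() checks whether any elements remain.
        System.out.println("Is stack empty? " + stack.isEmpty());

        // Push two more elements.
        stack.push(40);
        stack.push(50);

        // Print the remaining elements from top to bottom.
        System.out.println("Remaining elements in the stack:");
        for (int i = stack.size() - 1; i >= 0; i--) {
            System.out.println(stack.get(i));
        }
    }
}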

In Java, the Stack class also has a search() method. This is not commonly used because, as I told you earlier, stacks are really not meant for storing a list of objects and looking them up quickly. So, you're almost never going to use this method. I personally have never searched for an item in a stack. That's all about stacks. Next, we're going to look at one of the common interview questions around stacks.

  5.4. Exercise 5: Reversing a String with Stack – 05:40 min.

One of the common interview questions is reversing a string. So they give you a string like this: abcd. Then they ask you to write code to reverse the string. You can easily solve this problem using a stack. Basically, whenever you're dealing with a problem that involves going back or doing something in reverse order, stacks are your best friends. That's why we use them to implement the undo feature or the back button of our browsers. That's enough of a hint for you.

  Now, I want you to spend about 15 minutes on this exercise. When you're done, go to the next section to see my solution.

  5.5. Solution: Reversing a String with Stack – 06:21 min.

We’re going to start by adding a new class to our project. Let's call it StringReverser.java. A good solution was presented in the video. Figure 5.2 shows another (similar) solution.

 

  Figure 5.2: Code that demonstrates how to reverse the string “abcd”.

  In this code, the StringReverser class contains a reverseString method that takes an input string, pushes each character onto the stack, and then pops the characters from the stack to form the reversed string. The main method demonstrates how to use the reverseString method to reverse the string "abcd".

  When you run this code, it will output:

  Original string: abcd

  Reversed string: dcba
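Here is a sketch of the StringReverser class along the lines of Figure 5.2 (the figure's code may differ in details):

import java.util.Stack;

public class StringReverser {

    public static String reverseString(String input) {
        Stack<Character> stack = new Stack<>();

        // Push each character onto the stack.
        for (char ch : input.toCharArray())
            stack.push(ch);

        // Pop the characters back off: last in, first out.
        StringBuilder reversed = new StringBuilder();
        while (!stack.isEmpty())
            reversed.append(stack.pop());

        return reversed.toString();
    }

    public static void main(String[] args) {
        String input = "abcd";
        System.out.println("Original string: " + input);
        System.out.println("Reversed string: " + reverseString(input));
    }
}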

  When you’re done studying this solution, move on to the next section to look at another example of using stacks.

  5.6. Exercise 6: Balanced Expressions – 11:22 min.

Here's another common interview question. They give you a string and ask you to write code to examine whether the pairs and the order of brackets (parentheses) are correct in this string. Let me show you. Suppose we declare a string and set it to an expression like this: “(1 + 2)”. In this expression, we have an opening (left) bracket followed by a closing (right) bracket. This is what we call a balanced expression.

Now, what if we didn't terminate the string with the closing bracket? Then the string is not balanced. So, your code should examine the string and tell whether it is balanced or not. Let's look at a few more examples of balanced and unbalanced expressions:

  “(1 + 2>” (unbalanced)

  “)1 + 2(”  (unbalanced)

  “(([1] + ))[a]” (balanced)

  As you can see in the last example, we are not necessarily working with arithmetic expressions. We are working with an expression that includes a bunch of characters and brackets. Now, this could be a new programming language that you're going to design in the future. So with stacks, we can check the syntax of this expression and see if this expression is balanced or not. Now how are we going to solve this problem?

 

Well, we need to iterate over the string. We get one character at a time. When we reach a closing bracket, we need to look at the previous opening bracket and make sure they match. So we need to go backward. Now, what kind of data structure can we use to solve this problem? A stack! Because with stacks, we can go backward.

Here's the algorithm. We iterate over the string, getting one character at a time. If it's an opening or left bracket (it doesn't matter which type: we could have a square bracket or angle bracket), we push it onto the top of our stack. If we get a regular character like 1, a space, or +, we ignore it.

When we get to a closing bracket, we pop the character on top of the stack and compare it with the current character. If they match, awesome! We go forward and keep repeating this until we get to the end of the string. But if they don't match, for example, if the expression opened with an angle bracket but closed with a parenthesis, we immediately stop and return false from our method. That's the basic idea.

  Now, I want you to spend 20 to 30 minutes on this exercise. When you're done, move to the next section to see my solution.

  5.7. Solution 1: A Basic Implementation – 14:17 min.

  Now let's see how we can solve this problem. As I told you before, whenever you want to solve a problem, always try to break it down into smaller, easier to understand and easier to solve problems. In this case, we don't want to worry about different types of brackets. That's way too complex. So, let's narrow down the scope of this problem and only focus on parenthesis.

So, we can have an expression like this: “(1 + 2)”. Now let’s write code to see if this expression is balanced or not. When we come up with a working solution, then we'll expand it to support other types of brackets. With that, let's review our algorithm one more time.

  We're going to iterate over this string, get each character at a time. If this character is an opening bracket, we're going to push it onto the top of our stack. If it's a regular character, we're going to ignore it. If it's a closing bracket, we're going to pop the item on top of the stack. Now at the end, if our stack is empty, that means for every left bracket we had a right bracket. So our expression is balanced. But if there is still something in the stack, that means we had an opening bracket but we didn't close it.

For example, if our expression looks like this: “(1 + 2”, we push the opening bracket onto the top of our stack. But at the end, if our stack is not empty, that means we didn't have the closing bracket in the expression. So, let's go ahead and implement the solution.

 

First, we're going to add a new class called Expression.java to our project. Figure 5.3 shows my complete solution.

 

  Figure 5.3: Code demonstrating the string “(1 + 2)” is a balanced expression.

  This code defines the Expression class with a static method isBalanced() to check whether parentheses in the expression are balanced or not. The main() method demonstrates the usage by checking the balance of the given expression "(1 + 2)". The output of this code is:

  Expression is balanced.
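  In case Figure 5.3 isn't in front of you, here's a minimal sketch of the parentheses-only check described above. It is not the figure's exact code (the line numbers mentioned in this section refer to the figure, and the figure's version, as the next sections note, already handles other bracket types):

import java.util.Stack;

public class Expression {
    public static boolean isBalanced(String input) {
        Stack<Character> stack = new Stack<>();

        for (char ch : input.toCharArray()) {
            // Push opening brackets; ignore regular characters.
            if (ch == '(')
                stack.push(ch);
            // A closing bracket must have a matching opening bracket.
            else if (ch == ')') {
                if (stack.isEmpty())
                    return false;
                stack.pop();
            }
        }

        // Balanced only if no opening bracket is left over.
        return stack.isEmpty();
    }

    public static void main(String[] args) {
        String expression = "(1 + 2)";
        if (isBalanced(expression))
            System.out.println("Expression is balanced.");
        else
            System.out.println("Expression is not balanced.");
    }
}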

  Substitute other similar expressions like “)1 + 2(” in line 23 and check out the output of your code. You can also test edge cases like

  ( – only one opening bracket

  (( ) – two opening brackets but one closing bracket.

  Now that everything is working, we're done with the first step. Next, we'll add support for other types of brackets.

  5.8. Solution 2: Supporting Multiple Brackets – 19:35 min.

  In the video, the implementation was modified to add support for other types of brackets, like a curly brace “{”. However, the code shown in Figure 5.3 is smart enough to support other types of brackets, so we do not need to modify it. The video also covers a new edge case - a situation where we had a left bracket “(” but we closed it with the wrong type of bracket, like an angle bracket “>” or a square bracket “]”.

  5.9. Solution 3: First Refactoring – 23:11 min.

  In this video, the code written in section 5.8 was refactored and turned into beautiful, clean code. Again, the code shown in Figure 5.3 is smart enough to support other types of brackets, so we do not need to refactor it. The next video shows you how to make this code even cleaner.

  5.10. Solution 4: Second Refactoring – 27:20 min.

  Another thing refactored in this video is the long, ugly boolean expression in line 24. We really don't like the three logical ORs in it. So what did we do? Well, watch the video to learn how to refactor the code. Once again, the code shown in Figure 5.3 is smart enough to support other types of brackets, so we do not need to refactor it.
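  If you'd like to see one common way to support multiple bracket types without a long chain of logical ORs, here's a minimal sketch that collects the bracket characters in two lists. This is an assumption about the approach, not necessarily the exact refactoring shown in the video:

import java.util.List;
import java.util.Stack;

public class Expression {
    private static final List<Character> LEFT = List.of('(', '<', '[', '{');
    private static final List<Character> RIGHT = List.of(')', '>', ']', '}');

    public static boolean isBalanced(String input) {
        Stack<Character> stack = new Stack<>();

        for (char ch : input.toCharArray()) {
            if (LEFT.contains(ch))
                stack.push(ch);
            else if (RIGHT.contains(ch)) {
                if (stack.isEmpty())
                    return false;
                // The brackets match only if they are of the same type.
                char top = stack.pop();
                if (LEFT.indexOf(top) != RIGHT.indexOf(ch))
                    return false;
            }
        }

        return stack.isEmpty();
    }
}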

  5.11. Exercise 7: Implementing a Stack from Scratch – 33:12 min.

  Now that you have a good understanding of stacks, let's see how we can implement a stack from scratch. There are basically two ways to do this. We can use an array or a linked list to store the items in a stack. In this section, I'm only going to show you the implementation with an array and leave the other implementation to you as an additional exercise.

  Here's what I want you to do. I want you to create a Stack class with the standard operations such as push, pop, peek, and isEmpty. Use a regular integer array, int[], and not an array list, because I want you to handle the scenario where the stack gets full. Create two classes called Main and Stack for your code. Spend about 15 minutes on this exercise. When you're done, move to the next section to see my solution.

  5.12. Solution: Implementing a Stack from Scratch – 33:59 min.

  Now, let's see how we can implement a stack from scratch using an array. Let's imagine we have an array which initially has five elements that are all zeros: [0, 0, 0, 0, 0]. Figures 5.4A and 5.4B show the code for the Stack.java class, and Figure 5.4C shows the code for the Main class.

 

  Figure 5.4A: Code for the Stack.java class for implementing a stack – Part A.

 

  Figure 5.4B: Code for the Stack.java class for implementing a stack – Part B.

 

  Figure 5.4C: Code for the Main.java class for implementing a stack – Part C.

 

Figures 5.4A and 5.4B show a Stack class with methods for pushing, popping, peeking, and checking the stack's state. Figure 5.4C shows the code that demonstrates the usage of the implemented stack with the initial array [0, 0, 0, 0, 0].
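If the figures aren't at hand, here's a minimal sketch consistent with that description. Member and method names (such as print()) are assumptions, and the line numbers mentioned below refer to the figures' code:

public class Stack {
    private int[] items;
    private int count; // number of items currently in the stack

    public Stack(int capacity) {
        items = new int[capacity]; // e.g. new Stack(5) starts as [0, 0, 0, 0, 0]
    }

    public void push(int item) {
        if (isFull()) {
            System.out.println("Stack is full. Cannot push.");
            return;
        }
        items[count++] = item;
    }

    public int pop() {
        if (isEmpty()) {
            System.out.println("Stack is empty. Cannot pop.");
            return -1;
        }
        return items[--count];
    }

    public int peek() {
        if (isEmpty()) {
            System.out.println("Stack is empty. Cannot peek.");
            return -1;
        }
        return items[count - 1];
    }

    public boolean isEmpty() {
        return count == 0;
    }

    public boolean isFull() {
        return count == items.length;
    }

    public void print() {
        System.out.print("Content of Stack: ");
        for (int i = 0; i < count; i++)
            System.out.print(items[i] + " ");
        System.out.println();
    }
}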

  In the code shown in Figure 5.4C, two items are popped out. So, when you run the code, the output reads:

  Content of Stack: 10 20 30

  Peek: 30

  Content of Stack: 10

  Peek after two pops: 10

  Is stack empty? false

  Is stack full? false

  If you add one more pop statement in line 17, and change line 19 to

  System.out.println("Peek after three pops: " + stack.peek());

  the output will now read:

  Content of Stack: 10 20 30

  Peek: 30

  Content of Stack:

  Stack is empty. Cannot peek.

  Peek after three pops: -1

  Is stack empty? true

  Is stack full? false

  This brings us to the end of the section on how to implement a stack from scratch using an array.

  5.13. Wrap up – 42:17 min.

  5.13.1. Key Points About Stacks

  Stacks exhibit Last-In First-Out (LIFO) behavior. They can be implemented using arrays or linked lists. All operations run in O(1).

  In this section, you learned that stacks are data structures with Last-In First-Out (LIFO) behavior. For this reason, we can use them in situations where you want to undo actions or do things in the reverse order. You also learned that stacks can be implemented using arrays or linked lists, and that all operations in a stack run in constant time or O(1).

  In the next section we're going to talk about queues. I'll see you there.

  6. Queues

  Video Part 3 > 6. Queues (52:56 min)

  In this section we're going to talk about queues. Queues have a lot of applications in the real world. They're used by printers, operating systems, web servers - basically anywhere you want to process jobs in the order they're received. Let's jump in and see what queues are and how we can use them to solve real problems.

  6.1. Introduction – 00:27 min.

  A queue is a data structure similar to a stack, except that in a queue, the first item inserted is the first one that can be removed. This is what we call FIFO (First-In, First-Out). See Figure 6.1.

 

  Figure 6.1: An illustration of a queue.

  Stacks on the other hand are LIFO data structures (Last-In, First-Out). A queue in programming is like a queue or line of people in the real world. People join the line from the back and leave from the front. Queues work exactly the same. So, think of a queue like a line of people waiting in front of a ticket counter or a queue of tasks waiting to be executed by a processor. Now, let's look at a few applications of using queues.

  6.1.1. Applications of Queues

  We can use queues in situations where we have a resource, and this resource must be shared amongst many consumers. These consumers should line up and use the resource one by one. Here are a few examples.

  1. Printers use queues to manage jobs. A printer prints these jobs in the order they're submitted.

  2. Operating systems use queues to manage processes. These processes wait in a queue for their turn to run. This is called task scheduling. Task scheduling is a key function of an operating system's kernel, where it decides which processes get to use the CPU and for how long. Queues play a crucial role in achieving this by organizing processes in a manner that ensures fairness, efficiency, and responsiveness. So, computers use queues to manage tasks in the order they are received, ensuring that tasks are processed in a fair and orderly manner.

  3. Web servers also use queues to manage incoming requests. These requests go in a queue and get processed in the order they're received.

  4. Live support. This is another application of queues. For example, when you have a problem with your web host, you may jump on their website and use their live support. Now, there is someone sitting there responding to people's requests. He or she cannot service everyone at the same time, right? So you have to go in a queue. So, we have a resource with many

consumers. These consumers get access to the resource based on their position in the queue. These are all examples of using queues.

  Queues have various other applications in computer science such as:

  5. Breadth-First Search: Queues are used in graph algorithms like Breadth-First Search (BFS) to explore nodes level by level.

  6. Buffering: Queues are used to hold data in scenarios where producers and consumers work at different rates, like in data streaming applications.

  6.1.2. Common Methods for Implementing Queues:

  1. Array-based Queue: In this implementation, a fixed-size array is used to store the elements. Enqueueing involves adding an element to the rear of the array, and dequeuing involves removing an element from the front. As elements are dequeued, the front index is incremented.

  2. Linked List Queue: In this implementation, a linked list is used, where each element has a reference to the next element. Enqueueing involves adding a new node to the rear of the list, and dequeuing involves removing the node at the front.

  6.1.3. Operations and Runtime Complexities of Queues

  Now let's look at various operations that queues support and the runtime complexities. These operations help you interact with and manage queues effectively.

  1. Enqueue - Adding an element to the back (also called the rear) of the queue is known as enqueueing. This operation is responsible for inserting new items into the queue.

  2. Dequeue - Removing an element from the front (also called the head) of the queue is called dequeuing. This operation retrieves and removes the oldest item from the queue.

  3. Peek - Getting an item at the front without removing it. In other words, it allows you to examine the next element that will be dequeued without actually dequeuing it. This operation is useful when you want to check the value of the front element without altering the state of the queue.

  4. isEmpty - This operation checks whether the queue is empty or not. It returns a Boolean value (true or false) indicating whether there are any elements in the queue. If the queue is empty, it will return true; otherwise, it will return false. This operation is often used to prevent dequeuing from an empty queue, which could lead to errors. A queue is considered empty when there are no elements in it.

 

5. isFull - This operation checks whether the queue is full or not, particularly in cases where the queue has a maximum capacity. It returns a Boolean value indicating whether the queue has reached its capacity and cannot accommodate more elements. In some implementations of queues with fixed sizes, you need to use this operation to determine if you can enqueue more items without exceeding the queue's capacity. However, some implementations can dynamically resize to accommodate more elements.

  Similar to stacks, all these operations run in constant time or O(1) because items are added or removed at the ends, and this is very fast. Next, we're going to look at queue implementations in Java.

  6.2. Working with Queues – 02:31 min.

  In this section, we're going to talk about queue implementations in Java. First, the Queue type is an interface, not a class. Interfaces don't have code like classes, so we cannot instantiate them.

  In object-oriented programming, particularly in languages like Java, an interface is a programming construct that defines a contract or a set of methods that a class must implement. It serves as a blueprint for the methods that any class implementing the interface must provide. The key distinction between an interface and a class is that an interface defines what methods a class should have, but it doesn't provide the actual implementation for those methods. So, in many programming languages, including Java, the concept of a queue is defined as an interface rather than a concrete class.

  Let's take a closer look at this:

  Interface: An interface in Java (and similar languages) is a collection of method signatures without method bodies. It defines the structure that classes must adhere to when they implement the interface. In the context of a queue, an interface might specify methods like enqueue(), dequeue(), peek(), isEmpty(), and so on, without actually providing the implementation details.

  Class Implementing the Interface: When a class implements an interface, it must provide concrete implementations for all the methods defined in the interface. For instance, a class that implements a queue interface would

define how the enqueue, dequeue, peek, and other methods are implemented based on its internal data structure.

  Polymorphism and Abstraction: Interfaces provide a way to achieve polymorphism and abstraction in object-oriented programming. By programming to an interface rather than a specific class, you can write code that's more flexible and easily adaptable to different implementations of the same interface.

  In Java, the Queue interface is part of the Java Collections Framework and is used to represent the behavior of a queue data structure. Various classes in Java, such as ArrayDeque and PriorityQueue, implement the Queue interface to provide different implementations of a queue. This allows you to write code that works with queues in a general way, regardless of the specific underlying implementation.

  ArrayDeque is a data structure implementation in Java that provides a double-ended queue (deque) using a resizable array; "deque" stands for "double-ended queue." The key feature of ArrayDeque is its ability to efficiently add and remove elements from both ends of the deque, making it suitable for a wide range of scenarios.

  Now, there is so much we can talk about when it comes to interfaces. If you don't have a good and in-depth understanding of interfaces, I highly encourage you to learn it. Interfaces are very powerful, yet a lot of people don't really understand them properly because there's a lot of bad and misleading information out there.

  Have a quick look at Oracle's documentation for the Queue interface to see the methods that this interface or contract declares.

  Figure 6.2 shows a sample code on how to use ArrayDeque to add elements to a queue and then remove the first one in the front.

 

  Figure 6.2: A code that illustrates how to use ArrayDeque to add elements to a queue and remove the one in the front.
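  In case the figure isn't available, here's a minimal sketch of the same idea:

import java.util.ArrayDeque;
import java.util.Queue;

public class Main {
    public static void main(String[] args) {
        Queue<Integer> queue = new ArrayDeque<>();

        // Enqueue items at the back of the queue.
        queue.add(10);
        queue.add(20);
        queue.add(30);
        System.out.println(queue); // [10, 20, 30]

        // Dequeue the item at the front.
        queue.remove();
        System.out.println(queue); // [20, 30]
    }
}

  Note that we declare the variable as the Queue interface type and instantiate the concrete ArrayDeque class; this is the "programming to an interface" idea described above.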

  Next, we're going to look at a popular interview question around queues.

  6.3. Exercise 8: Reversing a Queue – 07:44 min.

  One of the popular interview questions is reversing a queue. The interviewer gives you a queue like this [10, 20, 30], and asks you to give an algorithm for reversing it. After reversing the queue, the order is going to change. So we're going to get [30, 20, 10].

  Here's what I want you to do. I want you to create a new method in this class:

  public static void reverse(Queue<Integer> queue) {

  }

  and you should write the code in this method. However, you're only allowed to use the add, remove, and isEmpty methods, nothing else. Also, as a hint, think about reversing a string. We've done that before (see section 5.5). How did we reverse the string? We need a similar approach here. Spend about 15 minutes on this exercise. When you're done, move to the next section to see my solution.

  6.4. Solution: Reversing a Queue – 08:50 min.

  Now, let's see how we can solve this problem. As I told you before, whenever you have a problem that involves doing something in the reverse order, you should use a stack. Earlier we used a stack to reverse a string. We can use the same technique to reverse a queue. Here's the idea.

  Let's imagine our queue Q looks like this: [10, 20, 30]. We create a stack S and remove items from this queue Q one by one and put them into the stack S. So, first we remove 10, put it in S, then we remove 20, put it in S, and finally do the same to 30. At this point, Q is empty and our stack S has three items. Now, 30 is the item on top of this stack, so it is the first item that we can remove.

  Now, we start removing each item from the stack, starting with the last item (30), one after the other until the stack is empty. So, we remove 30 and put it back in the queue. Next, we remove 20, put it back in the queue, and finally do the same to 10. Now, Q = [30, 20, 10] and S = [ ]. That’s the idea. Let’s now go ahead and implement this. The approach used here is very similar to the one used in the video. Figure 6.3 shows the implementation.

 

  Figure 6.3: A code for reversing a queue using a stack.
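  If Figure 6.3 isn't at hand, here's a minimal sketch consistent with the explanation below (using ArrayDeque as the concrete Queue implementation is an assumption):

import java.util.ArrayDeque;
import java.util.Queue;
import java.util.Stack;

public class Main {
    public static void main(String[] args) {
        Queue<Integer> queue = new ArrayDeque<>();
        queue.add(10);
        queue.add(20);
        queue.add(30);
        System.out.println("Original queue: " + queue);

        Stack<Integer> stack = new Stack<>();

        // Move every item from the queue onto the stack.
        while (!queue.isEmpty())
            stack.push(queue.remove());

        // Pop the items back into the queue; this reverses their order.
        while (!stack.isEmpty())
            queue.add(stack.pop());

        System.out.println("Reversed queue: " + queue);
    }
}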

  The output of the program is:

  Original queue: [10, 20, 30]

  Reversed queue: [30, 20, 10]

  Explanation of the code

  We imported the Queue and Stack types from the Java standard library.

  We create a queue Q and add elements 10, 20, and 30 to it using the add method.

  We print this queue on the console (our original queue).

  We create a stack S to hold the reversed elements.

  We use a while loop to remove items from Q using the remove method and push them onto stack S using the push method.

  After all elements are moved to stack S, we use another while loop to pop elements from stack S using the pop method and add them back to queue Q using the add method. This effectively reverses the order of elements.

  Finally, we print the reversed queue on the console (our reversed queue).

  In the next section, we're going to look at how to build a queue using an array.

  6.5. Exercise 9: Building a Queue Using Arrays – 11:07 min.

  There are basically three ways to implement a queue. We can store the items in an array, in a linked list or in a stack. So, if you're preparing for a technical interview, you should know all these implementations. First, we're going to look at implementing a queue using an array.

  Here's what I want you to do. I want you to create a class called ArrayQueue. As the name implies, this queue is implemented using an array, similar to the ArrayDeque class in Java. Now, in this class, I want you to implement the standard queue operations like enqueue, dequeue, peek, isEmpty, and isFull.

  Here's a hint before you get started. Let's imagine we have an array of 5 elements to store the items in this queue. So, initially, our array is going to look like this: [0, 0, 0, 0, 0]. After adding three items to our queue, it should look like this: [10, 20, 30, 0, 0]. Here, we have two empty spaces for two additional items. As we remove items from our queue, they should be removed from the front.

  However, just like stacks, we're not going to physically remove this item from our array. We're not going to shift items around. Instead, we're going to use two pointers to determine where the front is, and where the rear or back of our queue is. So, we can set the front F to 0, and this determines the front of our queue, and the rear R to 3, and that determines the end of our queue. Watch the video to understand better what I want you to do.

 

Now, go ahead and spend 15 to 20 minutes on this exercise. You'll see my solution in the next section.

  6.6. Solution 1: A Basic Implementation – 13:11 min.

  The implementation shown in the video produces a result where 0 is substituted whenever an item is removed from the front. For example, [0, 20, 30, 0, 0] is the result produced when item 10 is removed. This is how you were expected to solve this exercise.
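  Here's a minimal sketch of that video version, as described above. The class and member names are assumptions:

import java.util.Arrays;

public class ArrayQueue {
    private int[] items;
    private int front; // index of the first item in the queue
    private int rear;  // index at which the next item will be inserted

    public ArrayQueue(int capacity) {
        items = new int[capacity];
    }

    public void enqueue(int item) {
        if (rear == items.length) {
            System.out.println("Queue is full. Cannot enqueue.");
            return;
        }
        items[rear++] = item;
    }

    public int dequeue() {
        if (front == rear) {
            System.out.println("Queue is empty. Cannot dequeue.");
            return -1;
        }
        int item = items[front];
        items[front++] = 0; // substitute 0 for the removed item
        return item;
    }

    @Override
    public String toString() {
        return Arrays.toString(items);
    }
}

  Note that this sketch deliberately reproduces the limitation discussed at the end of this section: rear only ever moves forward, so the freed slots at the front are never reused.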

  In Figures 6.4A to 6.4C, however, I have modified the code to remove all zeros at the front of the queue automatically after items have been dequeued; the remaining items are shifted to the left. This is how a queue works in the real world most of the time. Study the code in Figures 6.4A to 6.4C to understand how it is done.

 

 

Figure 6.4A: A basic implementation of building a queue using an array – Part A.

 

  Figure 6.4B: A basic implementation of building a queue using an array – Part B.

 

  Figure 6.4C: A basic implementation of building a queue using an array – Part C.

  Here’s the output of the code:

  Initial queue: [10, 20, 30, 0, 0]

  Output after removing item 10: [20, 30, 0, 0]

  Output after removing item 20: [30, 0, 0]

  If you try to add one more item (a 6th) to the queue, for example by adding queue.enqueue(0); in line 67, the capacity will be exceeded and you will get “Queue is full. Cannot enqueue.” at the top of the result:

  Queue is full. Cannot enqueue.

  Initial queue: [10, 20, 30, 0, 0]

  Output after removing item 10: [20, 30, 0, 0]

  Output after removing item 20: [30, 0, 0]

  This is one big problem with this implementation. We have added three items and removed two of them. So, currently we have one item in our queue. Given that our queue has a capacity for 5 items, we should be able to add 4 extra items to this queue, [30, 0, 0, 0, 0]. But that does not work as you can see from the above result.

  The problem comes from how we are setting the rear pointer in line 20. After we insert the fifth item, rear is going to be set to 5. That is the reason why we got the above error: our array has a capacity of 5, which means the indexes of items in this array go from 0 to 4. So, rear is now pointing outside of our array. In the next section, I'm going to show you how to solve this problem using circular arrays or circular queues.

  6.7. Solution 2: Using Circular Arrays – 19:43 min.

  Before you see the solution in the video for this section, we're going to first talk about the very important and powerful concept of circular arrays in data structures. A circular array, also known as a circular queue or circular buffer, is a data structure used in programming to efficiently manage a fixed-size collection of elements. It is particularly useful in scenarios where you want to maintain a rolling or cyclic buffer of items, where the most recent elements added to the buffer overwrite the oldest ones when the buffer becomes full.

  Imagine a linear array or list with a fixed size. When new elements are added to this array, they are placed in consecutive positions. If the array becomes full, adding more elements would require either expanding the array (which might not be feasible due to memory constraints) or removing elements to free up space.

  In a circular array, instead of expanding the array or rejecting new elements when the buffer is full, we reuse the space by treating the array as if it were circular. This means that when we reach the end of the array, we wrap around to the beginning, effectively creating a loop.

  Here's a high-level explanation of how a circular array works:

  Initialization: We create a fixed-size array and two pointers, often referred to as the "head" and "tail." The head points to the next available position

for inserting an element, and the tail points to the oldest element in the buffer.

  Adding Elements: When we add an element, we place it at the position pointed to by the head pointer and then move the head pointer forward. If the head pointer reaches the end of the array, it wraps around to the beginning.

  Removing Elements: When we remove an element from the buffer, we do so from the position pointed to by the tail pointer and then move the tail pointer forward. Again, if the tail pointer reaches the end of the array, it wraps around.

  Handling Full Buffer: If the head pointer catches up to the tail pointer, it means the buffer is full. In this case, adding a new element would overwrite the oldest element in the buffer, so we increment the tail pointer before moving the head pointer to insert the new element.

  I want you to watch the solution offered in the video now. As explained in the video, this circular behavior allows us to maintain a rolling buffer of elements without needing to constantly resize the array or allocate new memory. It's particularly useful for applications like data streaming, where we want to process a continuous flow of data while keeping only a limited history of the most recent data points.

  In programming, circular arrays can be implemented using modular arithmetic to handle the circular wrapping behavior efficiently. This modular arithmetic is well illustrated in the video of this section. Various

programming languages and libraries provide circular buffer implementations to simplify this process.

  Code Illustration of Circular Queue Data Structure

  A circular queue is the extended version of a regular queue (discussed in section 6.6) where the last element is connected to the first element, thus forming a circle-like structure. See Figure 6.5.

 

  Figure 6.5: Representation of a circular array/queue

  The circular queue solves the major limitation of the normal queue. In a normal queue, after a bit of insertion and deletion, there will be unusable empty space. A circular queue works by the process of circular increment: when we try to increment the pointer and we reach the end of the queue, we start from the beginning of the queue.

  Here, the circular increment is performed by modulo division with the queue size. That is,

  if REAR + 1 == 5 (overflow!), REAR = (REAR + 1) % 5 = 0 (start of queue)

  6.7.1. Circular Queue Operations

  The circular queue works as follows:

  We have two pointers, FRONT and REAR.

  FRONT tracks the first element of the queue.

  REAR tracks the last element of the queue.

  Initially, we set the values of FRONT and REAR to -1.

  1. Enqueue Operation

  check if the queue is full.

  for the first element, set the value of FRONT to 0.

  circularly increase the REAR index by 1 (i.e., if the rear reaches the end, it wraps around to the start of the queue).

  add the new element in the position pointed to by REAR.

  2. Dequeue Operation

  check if the queue is empty.

  return the value pointed by FRONT.

  circularly increase the FRONT index by 1.

  for the last element, reset the values of FRONT and REAR to -1.

  However, the check for a full queue has an additional case:

  Case 1: FRONT == 0 && REAR == SIZE - 1

  Case 2: FRONT == REAR + 1

  The second case happens when REAR starts from 0 due to circular increment; when its value is just 1 less than FRONT, the queue is full. Figure 6.6 illustrates the enqueue and dequeue operations.

 

  Figure 6.6: Illustration of enqueue and dequeue operations.
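  Before you study the figures, here's a minimal sketch of the operations just described. The names and the display format are assumptions based on the output shown below:

public class CircularQueue {
    private int[] items;
    private int front = -1;
    private int rear = -1;

    public CircularQueue(int capacity) {
        items = new int[capacity];
    }

    public boolean isFull() {
        // Case 1: REAR is at the last index while FRONT is at the first.
        // Case 2: REAR has wrapped around and is just behind FRONT.
        return (front == 0 && rear == items.length - 1) || (front == rear + 1);
    }

    public boolean isEmpty() {
        return front == -1;
    }

    public void enqueue(int item) {
        if (isFull()) {
            System.out.println("Queue is full");
            return;
        }
        if (front == -1)
            front = 0; // first element
        rear = (rear + 1) % items.length; // circular increment
        items[rear] = item;
        System.out.println("Inserted " + item);
    }

    public int dequeue() {
        if (isEmpty()) {
            System.out.println("Queue is empty");
            return -1;
        }
        int item = items[front];
        if (front == rear) {
            // The last element was removed; reset the queue.
            front = -1;
            rear = -1;
        } else {
            front = (front + 1) % items.length; // circular increment
        }
        return item;
    }

    public void display() {
        if (isEmpty()) {
            System.out.println("Queue is empty");
            return;
        }
        System.out.println("Front -> " + front);
        System.out.println("Items -> ");
        for (int i = front; i != rear; i = (i + 1) % items.length)
            System.out.print(items[i]);
        System.out.println(items[rear]);
        System.out.println("Rear -> " + rear);
    }
}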

 

Figures 6.7A to 6.7E show my implementation of the circular array/queue in Java. Notice that I created a new class for it.

 

  Figure 6.7A: An implementation of the circular array/queue in Java – Part A.

 

  Figure 6.7B: An implementation of the circular array/queue in Java – Part B.

 

  Figure 6.7C: An implementation of the circular array/queue in Java – Part C.

 

  Figure 6.7D: An implementation of the circular array/queue in Java – Part D.

 

  Figure 6.7E: An implementation of the circular array/queue in Java – Part E.

  When you run this code, here’s the result you will get:

  Queue is empty

  Inserted 1

  Inserted 2

  Inserted 3

  Inserted 4

  Inserted 5

  Queue is full

  Front -> 0

  Items ->

  12345

  Rear -> 4

  Deleted Element is 1

  Front -> 1

  Items ->

  2345

  Rear -> 4

  Inserted 7

  Front -> 1

  Items ->

  23457

  Rear -> 0

  Queue is full

  6.8. Exercise: Building a Queue Using Stacks – 25:37 min.

  Another very popular interview question is implementing a queue using a stack. Let's imagine we have a queue with these three items: [10, 20, 30]. How can we implement a queue and use a stack to store these items?

  Here's the tricky part. With queues, items are added at the end of the queue. So, if we add 40 and 50, the queue now looks like this: [10, 20, 30, 40, 50]. But when we remove items, they get removed from the front. For example, when the first item is removed from the front, the queue now looks like this: [20, 30, 40, 50]. This is the exact opposite of how stacks work.

  So, with a single stack, we cannot store the items of a queue. But we can use two stacks. With two stacks, we can move items around and change the order in which they get removed from the queue. That was a hint for you. With that, I want you to spend about 20 minutes on this exercise. It's really not that difficult. If you put all your focus on this, I'm sure you can solve this problem on your own.

  6.9. Solution: Building a Queue Using Stacks – 26:32 min.

  The algorithm to build a queue using two stacks is often referred to as the two-stack queue algorithm. The algorithm and its implementation were well explained in the video. However, I want to give more explanation here.

  This algorithm involves using one stack for enqueue (adding elements) and the other stack for dequeue (removing elements) operations. The key idea is to maintain the order of elements as they would be in a queue.

  Here's a step-by-step explanation of how the algorithm works:

  1. Initialization:

  Create two empty stacks: S1 (for enqueue) and S2 (for dequeue).

  2. Enqueue Operation:

  When you want to enqueue an element into the queue, push it onto stack S1. Since stacks work in a Last-In-First-Out (LIFO) order, the most recently added element will be at the top of S1, which corresponds to the back of the queue.

  3. Dequeue Operation:

  When you want to dequeue an element from the queue:

 

  Check if stack S2 is empty. If S2 is empty, pop all elements from S1 and push them onto S2. This step effectively reverses the order of elements in S1 and makes the oldest element (front of the queue) at the top of S2. Pop the top element from stack S2. This is equivalent to dequeuing the front element of the queue.

  4. Empty Check:

  The queue is empty if both stacks S1 and S2 are empty.

  Note that this algorithm ensures that the order of elements is maintained in the queue, where the front of the queue corresponds to the top of stack S2 after reversing. It's important to consider the efficiency of this algorithm. Enqueue operations are generally straightforward and efficient since they directly involve pushing elements onto stack S1.

  However, dequeue operations can be less efficient, especially when S2 needs to be populated by reversing the elements from S1. In the worst case, this can lead to an O(n) time complexity for dequeue operations, where n is the number of elements in the queue.

  Despite the potential inefficiency for dequeue operations, the two-stack queue algorithm provides a simple way to implement a queue using two stacks and can be useful in scenarios where enqueue operations are much more frequent than dequeue operations.
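  Here's a minimal sketch of the algorithm as just described, using the TwoStackQueue class name mentioned below:

import java.util.Stack;

public class TwoStackQueue {
    private Stack<Integer> s1 = new Stack<>(); // for enqueue
    private Stack<Integer> s2 = new Stack<>(); // for dequeue

    public void enqueue(int item) {
        s1.push(item); // O(1)
    }

    public int dequeue() {
        // Refill s2 only when it is empty; moving the items reverses
        // their order, so the oldest item ends up on top of s2.
        if (s2.isEmpty())
            while (!s1.isEmpty())
                s2.push(s1.pop());
        return s2.pop(); // O(n) in the worst case, O(1) otherwise
    }

    public boolean isEmpty() {
        return s1.isEmpty() && s2.isEmpty();
    }

    public static void main(String[] args) {
        TwoStackQueue queue = new TwoStackQueue();
        for (int item : new int[] { 10, 20, 30, 40, 50 })
            queue.enqueue(item);
        while (!queue.isEmpty())
            System.out.println("Dequeued: " + queue.dequeue());
    }
}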

 

Figures 6.8A and 6.8B show another implementation of this algorithm. Note that a new class called TwoStackQueue is created first.

 

  Figures 6.8A: An implementation of the two-stack queue algorithm – Part A.

 

  Figures 6.8B: An implementation of the two-stack queue algorithm – Part B.

  When you run this code, here’s the result you will get:

  Dequeued: 10

  Dequeued: 20

  Dequeued: 30

  Dequeued: 40

  Dequeued: 50

  Now here's a question for you. What are the runtime complexities of the enqueue and dequeue operations when we use two stacks to implement a queue? Think about it for a few seconds.

  Here's the answer. Enqueue is an O(1) operation because we simply add an item on top of our stack. But what about dequeue? Well, in the worst case scenario, we have to iterate over all the items in the first stack, and move them to the second stack. So, the cost of this operation increases linearly according to the size of our queue. So, here we have an O(n) operation.

  6.10. Priority Queues – 34:15 min.

  You have learned how to work with queues and build them from scratch. That's great. Now, we're going to talk about a special type of queue called the priority queue. In priority queues, objects are processed based on their priority, not the order in which they join the queue.

  As an analogy, imagine you have a pile of mail to open, but you don't want to open it now. You just want to order these letters based on their priority. So, if they're high priority, you put them on the top; otherwise, you put them on the bottom. So, these letters are not ordered based on when you receive them. They're ordered based on their priority.

  Therefore, a priority queue is an abstract data structure that stores a collection of elements, each associated with a certain priority. It allows for efficient retrieval of the element with the highest (or lowest) priority among the elements currently in the queue. Priority queues are commonly used in scenarios where you need to manage tasks or items based on their relative importance or urgency.

  6.10.1. Key Features of a Priority Queue

  1. Element-Priority Association: Each element in the priority queue is associated with a priority value. This priority value determines the order in which elements are removed from the queue.

  2. Ordering Property: Priority queues maintain an ordering property that ensures that the element with the highest (or lowest) priority is always at the front of the queue. This element can be quickly accessed and removed.

  3. Two Main Operations: The main operations supported by a priority queue are:

  Insertion/Enqueue: Adds an element with a given priority to the priority queue.

  Extraction/Dequeue: Removes and returns the element with the highest (or lowest) priority from the priority queue.

  4. Efficient Operations: Priority queues are designed to have efficient insertion and extraction operations. Depending on the specific implementation, insertion and extraction can be accomplished in O(log n) time, where n is the number of elements in the priority queue.

  6.10.2. Applications of Priority Queues

  Priority queues are commonly used in various applications, such as:

  Task Scheduling: Managing tasks based on their urgency or importance.

  Dijkstra's Algorithm: Finding shortest paths in graph algorithms.

  Huffman Coding: Creating efficient variable-length codes for data compression.

  Event Simulation: Managing events in simulations based on their scheduled times.

  There are multiple ways to implement priority queues, each with its own trade-offs in terms of time complexity and space efficiency. Some common implementations include:

  Binary Heap: A binary heap is a binary tree that satisfies the heap property, which ensures that the highest (or lowest) priority element is at the root. Binary heaps can be implemented using arrays.

  Fibonacci Heap: A more advanced data structure that supports efficient decrease-key operations in addition to insertion and extraction.

 

Binomial Heap: A collection of binomial trees that supports efficient merging of heaps.

  In summary, a priority queue is a fundamental data structure that allows you to manage elements with associated priorities efficiently. It's an essential tool in algorithm design and various applications that involve ordering elements based on their relative importance.

  Now let me show you how to work with priority queues in Java. One example was written and explained in the video, but Figure 6.9 shows another simple Java code example that demonstrates how to work with a priority queue using the integers 1, 2, 3, and 5. A new class called PriorityQueueExample is created in the implementation.

 

  Figure 6.9: Example of one implementation of a priority queue.
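  In case the figure isn't at hand, here's a minimal sketch along the lines the text describes:

import java.util.PriorityQueue;

public class PriorityQueueExample {
    public static void main(String[] args) {
        PriorityQueue<Integer> priorityQueue = new PriorityQueue<>();

        // Items are ordered by priority, not by insertion order.
        priorityQueue.add(3);
        priorityQueue.add(1);
        priorityQueue.add(5);
        priorityQueue.add(2);

        // poll() removes the smallest item first (min-heap behavior).
        while (!priorityQueue.isEmpty())
            System.out.println("Dequeued: " + priorityQueue.poll());
    }
}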

  In this example, we use the PriorityQueue class from the java.util package. We enqueue the integers 3, 1, 5, 2 into the priority queue. As we dequeue

elements using the poll method, they are removed from the priority queue in order of their priority, resulting in the following output in ascending order:

  Dequeued: 1

  Dequeued: 2

  Dequeued: 3

  Dequeued: 5

  The elements are dequeued in ascending order because the default behavior of PriorityQueue is to dequeue elements with the lowest value (min-heap behavior). If you want to use a max-heap behavior to dequeue elements with the highest value, you can initialize the priority queue with a custom comparator:

  PriorityQueue<Integer> priorityQueue = new PriorityQueue<>((a, b) -> b - a); // Max-heap behavior

  This will output the elements in descending order:

  Dequeued: 5

  Dequeued: 3

  Dequeued: 2

  Dequeued: 1

  6.11. Exercise: Building a Priority Queue – 36:09 min.

  Next, you're going to build a priority queue from scratch. In this section, we're going to look at one way to implement a priority queue. There are basically two ways to implement priority queues. We can use arrays or another data structure called heaps. We're going to look at heaps in volume 4 of the series. For now, let's look at implementing a priority queue using an array. So, as an exercise, I want you to create a new class for this priority queue.

  Now, let's imagine in this queue we have these numbers: [1, 3, 5, 7]. Now we want to insert 2. In a regular queue, 2 should join the back of the queue, but in this priority queue we should insert 2 between 1 and 3, like this: [1, 2, 3, 5, 7]. So, all the items are sorted in ascending order.

  Now, we could also sort this in descending order, as explained in the previous section, but let's not worry about that for now. Let's just focus on having a queue where the items are sorted in ascending order. So, how can we find the right position to insert a new number? Let's come up with an algorithm. So I'm going to remove 2 and start over: [1, 3, 5, 7].

  One algorithm to solve this problem was presented in the video but here's another one.

  1. Enqueue 2:

  Start with the initial priority queue: [1, 3, 5, 7].

Enqueue the number 2: [1, 3, 5, 7, 2].

  2. Adjust Priority:

  Compare the newly added element (2) with its parent (7). Since 2 is smaller than its parent 7, swap them: [1, 3, 5, 2, 7].

  3. Adjust Priority Again:

  Compare the newly added element (2) with its parent (5). Since 2 is smaller than its parent 5, swap them: [1, 3, 2, 5, 7].

  4. Adjust Priority Again:

  Compare the newly added element (2) with its parent (3). Since 2 is smaller than its parent 3, swap them: [1, 2, 3, 5, 7].

  The final state of the queue after insertion and adjustments will indeed be [1, 2, 3, 5, 7], as intended. This algorithm ensures the correct order of elements in the priority queue.

  With all this, I want you to spend 15 to 20 minutes to build a priority queue. Think about all the edge cases, like what if our array is empty? What if it has a single item? What if it has an even number of items? What if it gets full? Think of all these scenarios. When you're done with your solution, move on to the next section to see my solution.

  6.12. Solution: Building a Priority Queue – 40:06 min.

  In the video, two classes were used for this exercise: the Main.java class and the priority queue class itself. The code worked well after it was written, but it needed refactoring to make it cleaner. The refactored code is presented in section 6.13.

  6.13. Solution 2: Refactoring Our Code – 48:57 min.

  Figures 6.10A to 6.10C show the refactored code implementation.

 

  Figure 6.10A: Implementation of a priority queue for exercise 6.11 – Part A.

 

  Figure 6.10B: Implementation of a priority queue for exercise 6.11 – Part B.

 

  Figure 6.10C: Implementation of a priority queue for exercise 6.11 – Part C.
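  If the figures aren't available, here's a minimal sketch of the insertion logic described in section 6.11. The class layout and capacity are assumptions, not the exact refactored code of the figures:

import java.util.Arrays;

public class PriorityQueue {
    private int[] items = new int[10]; // capacity is an arbitrary assumption
    private int count;

    public void add(int item) {
        if (count == items.length)
            throw new IllegalStateException("Queue is full.");

        // Append the new item at the end of the used portion of the array.
        items[count++] = item;

        // Bubble it toward the front while it is smaller than the item before it.
        for (int i = count - 1; i > 0 && items[i] < items[i - 1]; i--) {
            int temp = items[i];
            items[i] = items[i - 1];
            items[i - 1] = temp;
        }
    }

    @Override
    public String toString() {
        return Arrays.toString(Arrays.copyOf(items, count));
    }

    public static void main(String[] args) {
        PriorityQueue queue = new PriorityQueue();
        for (int item : new int[] { 1, 3, 5, 7, 2 })
            queue.add(item);
        System.out.println(queue); // [1, 2, 3, 5, 7]
    }
}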

  6.14. Wrap up – 51:59 min.

  Let's quickly recap all the key points you learned in this section. Queues are FIFO data structures, which means the first item that is inserted is the first one that can be removed. We use queues in situations where we have a resource and this resource should be shared amongst many consumers. These consumers should line up and use the resource one by one. That is when we use a queue. We also have priority queues, where items are processed based on their priority. See Figure 6.11.

 

  Figure 6.11: Key points about queues.

  Figure 6.12 shows the operations supported by queues. All these operations run in constant time or O(1) because items are added or removed at the ends.

 

  Figure 6.12: Operations supported by queues.

  Having said that, adding a new item to a priority queue implemented using an array is an O(n) operation because items may have to be shifted. This is all about queues. In the next section, we're going to look at hash tables.

  7. Hash Tables

  Video Part 3 > 7. Hash Tables (01:07:51min)

  7.1. Introduction – 00:00 min.

  In this section we're going to talk about hash tables, also called dictionaries. Hash tables give us super fast lookups and we can use them to optimize a lot of algorithms. That's why they come up a lot in interviews. So, let's see what hash tables are, how they work, and how we can use them to solve real problems.

  7.2. What are Hash Tables? – 00:27 min.

  A hash table, also known as a hash map, is a data structure that provides a way to store and retrieve data quickly based on a key. It is designed to offer efficient insertion, deletion, and retrieval operations. Hash tables are commonly used in computer science for tasks like database indexing, caching, and implementing associative arrays or dictionaries.

  The core idea behind a hash table is to use a hash function to convert the input key into an index or a location within an array. This index is used to store and retrieve the associated value. Hash functions take in a key as input and produce a fixed-size value (usually an integer), which is then used as the index in the array.

  7.2.1. Benefits & Applications of Hash Tables

  The main benefits of hash tables are their constant-time average-case complexity for insertion, deletion, and retrieval operations. This is possible when the hash function is well-designed and distributes the keys evenly across the available indices in the array. However, collisions can occur when two different keys produce the same hash value, resulting in a situation where two or more keys are mapped to the same index. There are various techniques to handle collisions, including:

  Chaining: Each index in the array contains a linked list or another data structure to store multiple key-value pairs that hash to the same index.

  Open Addressing: In case of a collision, the algorithm probes for the next available slot in the array until an empty slot is found.

  Choosing an appropriate hash function is crucial for the efficiency of hash tables. A good hash function should distribute keys uniformly to minimize collisions. Additionally, the array size should be chosen carefully to avoid excessive collisions or wasting memory.

  Hash tables are widely used due to their speed and efficiency, but they have some limitations as well. For example, resizing the hash table can be computationally expensive, and a poor hash function choice can lead to performance degradation. Nonetheless, when implemented correctly, hash

tables provide an essential tool for solving various problems in computer science and software engineering.

  Therefore, hash tables are my personal favorite data structures because they give us super fast lookups and have a lot of applications. We use them in spellcheckers. For example, using a hash table, we can quickly look up a word amongst tens of thousands of words in less than a second. They're also used in building dictionaries.

  Again, we can quickly look up a word and find its translation in another language. They're also used in compilers. For example, compilers use hash tables to quickly look up the address of functions and variables. They're also used in code editors, literally anywhere you want to look up an item super fast.

  Most if not all programming languages have support for hash tables, but under different names. In Java, for example, they're called hash maps. In JavaScript, they're called objects; in C# and Python, they're called dictionaries. They're exactly the same thing.

  7.2.2. How Hash Tables Work

  Now, let's see how they work and why they give us super fast lookups. At a very high level, we use hash tables to store key-value pairs. Let's say you want to store a list of employees and be able to quickly look up an employee by their employee number. Since every employee has a unique employee number, we can use this number as a key and the employee object as the value. This is illustrated in Figure 7.1.

 

  Figure 7.1: An example of a hash table’s key-value pair

  Now, let's say we want to store an employee object in our hash table. Our hash table takes the employee number and passes it to what we call a hash function, and this hash function will tell us where the employee object should be stored in memory. See Figure 7.2. Our hash table will then store this employee object at that location.

 

  Figure 7.2: How a hash table works.

  Now, we want to look up an employee by their employee number. Our hash table once again passes the employee number to this hash function, and it'll figure out where this employee object is stored. So, it'll grab it and return it for us.

  Now, the interesting thing about this hash function is that it is deterministic, which means every time we give it the same input, it'll return the same value. This is why we can use it for both storing and retrieving objects. That's the basic idea of hash tables. Internally, a hash table uses an array to store our objects. We'll look at that in more detail later in this section.

  7.2.3. Operations Supported by Hash Tables

  Here are the operations supported by hash tables.

  Insert – O(1)

  Lookup – O(1)

  Delete – O(1)

  All these operations run in O(1) because the hash function tells us where in memory we should store an object or look it up so we don't have to iterate over the entire array of objects. Having said that, in the worst case scenario, these operations may run in O(n), but practically speaking, this is something that rarely happens. So, most people have come to this agreement that hash table operations run in O(1).

  Next, I'm going to show you how to work with hash tables in Java.

  7.3. Working with Hash Tables – 03:11 min.

  In this section, I'm going to show you how to work with hash tables in Java. In Java, we have the Map interface, which is a contract that declares all the methods that map data structures should provide. Have a quick look at the Map interface documentation. Under “All Known Implementing Classes”, you can see various implementations of this interface. See Figure 7.3.

 

  Figure 7.3: Various implementations of the Map interface.

  80% of the time you're only going to use 20% of these implementations. The one that you'll use most of the time is HashMap. We also have Hashtable, which is an older implementation. It's considered legacy and you shouldn't use it in new applications. So, most of the time we use HashMap. We also have ConcurrentHashMap, which is used in multi-threaded applications, that is, applications that have multiple threads working with the hash map. That is an advanced topic, so let's not worry about it.

 

Now, back to IntelliJ. Let's say we want to store our employees in a hash map. We need a key and a value. For the key, we're going to use the employee number; let's imagine that this is going to be an integer. For the value, we're going to store employees' names, so the value is going to be a string. We will also import HashMap as follows: import java.util.HashMap;

  One useful method in the Map interface is put(), which we will use to add a key-value pair to our hash map. Let's say employee number 1 is Daniel, employee number 2 is Michael, and employee number 3 is John. Figure 7.4 shows the completed code.

 

  Figure 7.4: Implementation of java code to store employees numbers and names in a hash map.
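  In case the figure isn't at hand, here's a minimal sketch of what it describes:

import java.util.HashMap;
import java.util.Map;

public class Main {
    public static void main(String[] args) {
        Map<Integer, String> map = new HashMap<>();

        // put() adds a key-value pair to the hash map.
        map.put(1, "Daniel");
        map.put(2, "Michael");
        map.put(3, "John");

        System.out.println(map); // {1=Daniel, 2=Michael, 3=John}
    }
}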

  Now, when we print this hash map on the console, we get this result:

  {1=Daniel, 2=Michael, 3=John}

  Now, what if we insert a key-value pair in line 11, but the key already exists (a duplicate map key of 3), and this time the value is going to be different? Let's say,

  map.put(3, "Marianne");

  What do you think is going to happen? Give it a try before you look at the result below!

  {1=Daniel, 2=Michael, 3=Marianne}

  So, the previous value (John) was overwritten with the new value (Marianne). In hash maps, we cannot have duplicate keys. But what about null values? What if we add a key-value pair but the value is null? That is, we insert this in line 12:

  map.put(4, null);

  Here’s the updated result:

  {1=Daniel, 2=Michael, 3=Marianne, 4=null}

  That is perfectly fine. We can store null values in hash maps. But what about null keys? For example, if we change line 12 to

  map.put(null, null);

  the updated result is

  {null=null, 1=Daniel, 2=Michael, 3=Marianne}

 

As you can see, that is perfectly fine as well. Now, does this have an application in the real world? Not that I can think of! But this is something that they sometimes ask you in interviews. They ask you if hash maps allow null keys or null values. The answer is yes to both.

  We also have the remove() method for removing a key value pair. For example, if we insert this in line 13,

  map.remove(null);

  IntelliJ automatically displays the parameter name hint key before null, so line 13 appears as follows:

  map.remove( key:null);

  Here’s the updated result:

  {1=Daniel, 2=Michael, 3=Marianne}

  Now the null key is gone. Before we continue, let's first delete line 11, that is, the entry map.put(3, "Marianne"); from our code.

  Now, we can also get the value of a key using the get() method. So if we write map.get(3), this will return a string because our values are string objects. So we store it in a variable called value as follows:

  var value = map.get(3);

 

and then use the following line to print the result on the console:

  System.out.println(value);

  Here’s the result:

  John

  We can also check the existence of a key or a value using

  map.containsKey(3);

  and

  map.containsValue("Daniel");

  Now, both these methods return a Boolean value, but they have different time complexities:

  map.containsKey(3) runs in O(1), while map.containsValue("Daniel") runs in O(n).

  Here's the reason: when we call the containsKey method, our hash map will take this key and pass it to its hash function. It'll figure out where this object should be stored. Then it'll look it up. If there is an object at that location, it'll return true; otherwise, it'll return false. So, this operation is super fast. It doesn't involve iterating over all the objects.

 

In contrast, when we call the containsValue method, our hash map cannot rely on its hash function. It has to iterate over all the objects and compare each value with the "Daniel" argument. Please watch the video for this section to learn more about hash maps. Next, we're going to look at a popular interview question that involves using hash tables.

  7.4. Exercise: First Non-repeated Character – 09:18 min.

  A super popular interview question is finding the first non-repeated character in a string. For example, let's say we have a string like “A Green Apple”. In the string, A is repeated twice. G is not repeated. R is not either. E is repeated three times, N is not repeated, and so on. So, you should write a method that would return G as the first non-repeated character in this string. Now, for simplicity, let's not worry about uppercase and lowercase characters. Let's imagine all of these are in lowercase, like this: “a green apple”.

  This is a fantastic exercise that trains your programming brain. So, do your best to solve it on your own, even if it's going to take half an hour. But trust me, it's easier than what you think. So spend about 15 minutes on this exercise and then see my solution in the next section.

  7.5. Solution: First Non-repeated Character – 10:13 min.

  To solve this problem, we need to iterate over the string “a green apple” from the beginning to the end, and count the number of times each character has been repeated. Now, what data structure is ideal here? One that lets us look up items quickly. So, we can store each character and the number of times it's repeated in a hash table.

  So, as we iterate over the string, we get the current character and quickly look it up in our hash table. If we have it, we'll increment the number of times it's repeated. Otherwise, we'll add it to our hash table. So, after this iteration, our hash table is going to look as follows.

  We're going to have a key-value pair like this: a = 2, because we have two a's in the string. We also have a white space that is repeated twice, so ' ' = 2. What about g? It's not repeated, so g = 1. So, in the first step we should build this hash table:

  a=2

   =2

  g=1

  Once we're done, we'll iterate over the string from the beginning one more time. We get the current character and look up the number of times it's been repeated. If it's more than one, we ignore it and go to the next character. For example, a is repeated twice, so we skip it. White space is also repeated twice. We move on. We get to g. Now, how many times is g repeated in this string? Only once. So, this is the first non-repeating character. We stop our iteration right away and return that character. That is our algorithm. A good implementation was already explained in the video.

  But here’s another similar implementation that produces the same result.

  First, we're going to add a new class for this exercise. Figures 7.5A and 7.5B show my implementation.

 

  Figure 7.5A: Implementation of Java code snippet that uses a hash table to find the first non-repeated character in the string "a green apple" – Part A

 

  Figure 7.5B: Implementation of Java code snippet that uses a hash table to find the first non-repeated character in the string "a green apple" – Part B.

  This code defines a findFirstNonRepeatedChar function that takes a string as input and uses a hash map to count the frequencies of characters in the string. Then, it iterates through the string to find the first character with a frequency of 1, which indicates a non-repeated character. The main function demonstrates how to use the findFirstNonRepeatedChar function with the given input string.
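  Here's a minimal sketch along those lines; the class name and the exact return convention are assumptions based on the description above:

import java.util.HashMap;
import java.util.Map;

public class FirstNonRepeatedCharacter {
    public static Character findFirstNonRepeatedChar(String str) {
        Map<Character, Integer> frequencies = new HashMap<>();

        // First pass: count how many times each character occurs.
        for (char ch : str.toCharArray())
            frequencies.put(ch, frequencies.getOrDefault(ch, 0) + 1);

        // Second pass: the first character with a count of 1 is the answer.
        for (char ch : str.toCharArray())
            if (frequencies.get(ch) == 1)
                return ch;

        return null; // no non-repeated character found
    }

    public static void main(String[] args) {
        Character result = findFirstNonRepeatedChar("a green apple");
        if (result != null)
            System.out.println(result); // g
        else
            System.out.println("No non-repeated character found in the string.");
    }
}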

  When you run this code, the result simply says “g” because it is the first non-repeating character. You can change the input string if you like to test the code further. If there's no non-repeated character in your string, the following output will be returned:

  No non-repeated character found in the string.

  7.6. Sets – 17:52 min.

  In Java and many other languages, we have a data structure that is similar to hash maps or hash tables, but slightly different. In the last section, you learned that maps or hash tables or hash maps, whatever you want to call them, basically map a key to a value. Now, we have another data structure called sets. Sets only have keys; they don't have values. You might ask, what is the point of this? Well, they're actually very useful in solving a lot of problems because sets, just like maps, don't allow duplicate keys.

  Let's say we have a list of numbers [1, 2, 3, 3, 2, 1, 4, 5]. In this list we have duplicates. Now we want to remove these duplicates. All we have to do is to get each item and add it to a set. Because sets don't allow duplicates, we'll end up with a unique list of values in this list. That's one application of sets.

  Now, similar to maps, we have an interface called Set<T>. Have a quick look at the Set interface documentation. This is a generic interface, so we have to specify the type of keys that we want to store in this set. As you can see in Figure 7.6, this interface has several different implementations. Most of the time we use HashSet.

 

  Figure 7.6: Various implementations of the Set interface.

  Now, let's see how we can use a HashSet to remove duplicates in a list. First, we create a public class for this example. One implementation was already explained in the video for this section, but another one is shown in Figure 7.7.

 

 

Figure 7.7: Implementation of Java code snippet that uses a HashSet to remove duplicates from the list [1, 2, 3, 3, 2, 1, 4, 5].
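  In case the figure isn't at hand, here's a minimal sketch of the same idea. (Note that a HashSet does not guarantee iteration order in general; for small integers it happens to come out ascending.)

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class Main {
    public static void main(String[] args) {
        List<Integer> list = new ArrayList<>(List.of(1, 2, 3, 3, 2, 1, 4, 5));

        // A HashSet silently drops duplicate values.
        Set<Integer> unique = new HashSet<>(list);

        // Replace the contents of the list with the unique values.
        list.clear();
        list.addAll(unique);

        System.out.println(list); // [1, 2, 3, 4, 5]
    }
}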

  In this code, we use a HashSet to store the unique elements from the input list. HashSet automatically ensures that duplicate values are not allowed. Then, we clear the input list and add the unique elements back to it. The final result is a list without any duplicates:

  [1, 2, 3, 4, 5]

  That's all we had to do. And here we have a unique list of values. Now, just like the Map interface, the Set interface has a remove() method. Furthermore, we can remove one object or all objects in a set. We can also check whether the set contains a given number, or a given key more accurately. We can get the size of the set, we can clear the set, and we can iterate over it, just like iterating over maps. So, sets are pretty easy. Next, we're going to look at another interview question.

  7.7. Exercise: First Repeated Character – 20:16 min.

  Another popular interview question is a variation of the exercise we had earlier. Let's say we have a string like “green apple”. We want to find the first repeated character. So, “g” is not repeated here. We move on. “r” is not repeated either. “e” is repeated three times. So “e” is the first repeated character. This is fairly easy. You can figure it out in 5 to 10 minutes. You'll see my solution in the next section.

  7.8. Solution: First Repeated Character – 20:48 min.

  Now, here's the algorithm we need for solving this problem. We iterate over the string “green apple”. We get each character, and we need to see if we have seen that character before or not. So, we need a data structure so that we can quickly look up a value. Obviously, we can use a hash table. So, our keys are going to be these characters, but what is going to be the value of these keys?

  In this case, we don't really care about the number of times each character has been repeated because we just want to find the first repeated character. So, for this particular exercise, it's better to use a set because all we need is a set of characters - the characters that we have seen before. We don't care how many times they have been repeated. In the implementation shown in Figure 7.8, I created a class called FirstRepeatedCharacter.

 

 

Figure 7.8: A Java code snippet that uses a HashSet to find the first repeated character in the string "green apple".

  In this code, we use a HashSet to keep track of characters we have encountered so far. We iterate through the characters in the input string, and for each character, we check if it already exists in the HashSet. If it does, we've found the first repeated character and return it. If not, we add the character to the HashSet and continue. If no repeated character is found, we return the null character '\0'.
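  Here is a minimal sketch of such an implementation; the method name is my own, and the code in Figure 7.8 may differ slightly.

  import java.util.HashSet;
  import java.util.Set;

  public class FirstRepeatedCharacter {
      public static char findFirstRepeatedChar(String str) {
          Set<Character> seen = new HashSet<>();
          for (char ch : str.toCharArray()) {
              if (seen.contains(ch))
                  return ch;    // the first character we've seen before
              seen.add(ch);
          }
          return '\0';          // no repeated character found
      }

      public static void main(String[] args) {
          char result = findFirstRepeatedChar("green apple");
          System.out.println("The first repeated character is: " + result);
      }
  }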

  When we run this code, the output is, as expected:

  The first repeated character is: e

  7.9. Hash Functions – 23:24 min.

  In the previous sections, you learned how we can use hash tables or hash maps to solve programming problems. Now going forward, we're going to dig in and see how these data structures work internally and why they're super fast.

  Earlier I told you that when we insert a key-value pair into a hash table, for example, map.put(1, "John"), the hash table takes the key (1) and, based on it, figures out where to store the value "John" in memory. But technically, we store values in an array. So, in more accurate terms, the hash table should map the key to an index, the index at which we should store the value. This is the job of a hash function.

  So, a hash function is a function that gets a value and maps it to a different kind of value, which we call a hash value, a hash code, a digest, or just a hash. In the context of data structures, a hash function maps a key to an index. Let's see how that works. In map.put(1, “John”), if our employee number or key is 1, we can easily store the value John at index 1. So, if we have an array of employees, we can simply set items[1] = “John”, right? But what if the key is a very large number, say map.put(123456, “John”), and we only have 100 employees?

  We don't want to create a super large array just to store 100 employees. Let's say our array has a capacity of 100 items. So, we should map the employee number 123456 to a number between 0 to 99 because 99 is the index of the last item in our array of 100, right? How can we do this? How can we map the number 123456 to this range (0 to 99)?

  You learned how to do that earlier in the course: we use the modulus operator (section 6.7). So, we calculate the remainder of dividing 123456 by the size of our array (100). Figure 7.9 shows my simple implementation of a hash function for this exercise, wrapped in a small class.

 

  Figure 7.9: An implementation that uses a hash function to find the index at which to store "John" in an array of 100 employees when the key is an integer.

  In this code, the hash function takes the employee number and the capacity of our array as parameters and calculates the index using a modulo operation. This will ensure that the index is within the range of 0 to 99 for an array with a capacity of 100. The result, as you can see at the bottom left of Figure 7.9, is 56.
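  For reference, here is a minimal sketch consistent with that description (the class and method names are illustrative):

  public class HashFunction {
      // Maps an integer key to an index in the range 0 to capacity - 1.
      public static int hash(int key, int capacity) {
          return key % capacity;
      }

      public static void main(String[] args) {
          System.out.println(hash(123456, 100));  // prints 56
      }
  }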

  But what if our keys were strings? Let's change integer 123456 to a string containing a hyphen and some letters, say "A-123456BK". So, let’s say this is the new employee number for John. How can we convert this to an index?

Well, we can use the hashCode method. In Java, hashCode is a built-in method that is available on all objects. It returns an integer value generated from the content of the object. The primary purpose of the hashCode method is to provide a way to efficiently organize and retrieve objects in data structures like hash tables, hash sets, and hash maps.

  Once again, we can apply the modulus operator between that number and 100. Take a look at my implementation in Figure 7.10. The result, as you can see at the bottom left of Figure 7.10, is 28.

 

  Figure 7.10: An implementation that uses a hash function to find the index at which to store "John" in an array of 100 employees when the key is a string.
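  A minimal sketch of this version, again with illustrative names, might look like this:

  public class StringHashFunction {
      // Maps a string key to an index in the range 0 to capacity - 1.
      public static int hash(String key, int capacity) {
          // hashCode() can be negative, so take the absolute value first.
          return Math.abs(key.hashCode()) % capacity;
      }

      public static void main(String[] args) {
          System.out.println(hash("A-123456BK", 100));  // prints 28
      }
  }

  One subtle caveat: Math.abs(Integer.MIN_VALUE) is still negative, so production code often clears the sign bit instead, for example (key.hashCode() & 0x7fffffff) % capacity.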

  7.9.1. How HashCode Works

  Here's how the hashCode method works:

  When you call hashCode on an object, it generates an integer value based on the object's internal state. This value is typically calculated using a mathematical algorithm that takes into account the object's fields and properties.

  The generated hash code is not guaranteed to be unique for each object. In fact, it's common for different objects to have the same hash code. Therefore, hash codes are used as a starting point for organizing objects in hash-based data structures.

  Hash-based data structures, like HashMap, HashSet, and others, use the hash code to determine which "bucket" an object should be placed in. Buckets are essentially storage locations within the data structure.

  When you want to retrieve an object from a hash-based data structure, you provide a key (e.g., a string or an integer), and the data structure uses the key's hash code to quickly identify the bucket where the object should be located. This significantly reduces the search time compared to searching through all objects linearly.

  In the code provided in Figure 7.10, I used the hashCode method on the string key (the employee number) to generate a hash code. I then took the absolute value of that hash code and used modulo to map it to an index within the capacity of an array. This allows us to efficiently map the string key to an index in the array, ensuring that "John" is stored in a specific location based on the employee number.

  There are many algorithms for calculating hashes. Some of these algorithms are used in cryptography. For example, when storing people's passwords, we don't want to store them in plain text, so we pass them to a hash function, which generates a long, complicated string based on the password, something like this: 289123098asdlasdl1238230409sdfiulk. This is a hash value.

  In the context of cryptography, this is not going to be an index value. This is just a hash value. But in the context of data structures or hash tables, this hash function should return an index value. This is where we want to store an item in an array. That's the basic idea about hashing.

  7.10. Collisions – 29:19 min.

  Now that you’ve learned how hash functions work, it’s important you also know that two distinct keys can generate the same hash value. Let's say our hash function generates the hash value 10 for storing two different values (Figure 7.11). What are we going to do?

 

  Figure 7.11: An illustration of collision when generating hash values

  We cannot store two items at the same index. This is what we call a collision. If you're not a native English speaker, a collision is an accident or a crash, just like when two cars crash. So, we have two values that are colliding.

  7.10.1. How to Handle Collisions

  Now, there are two ways to handle collisions. One way is to have each cell in our array point to a linked list. So, we're not going to store the values in the array itself; we're going to store them in these linked lists. If we have a collision, we simply add the new item at the end of the linked list. This is what we call chaining, because we're chaining these items together, as shown in Figure 7.12.

 

  Figure 7.12: An illustration of chaining: one way of handling collision

  Another solution is to find a different slot for storing the second value. This is what we call open addressing because we're finding a new address to store the second value. There are different open addressing algorithms, and we're going to look at them over the next few sections.

  7.11. Chaining – 30:26 min.

  Over the next few sections, we're going to look at a few different ways to handle collisions. In this section, we're going to look at chaining, which involves using a linked list to store multiple items at the same array index. Let's look at a real example. Let's say the size of our hash table is 5, so we have an array of 5 cells (0 to 4) for storing items. We refer to these cells as buckets or slots.

  Initially, all our buckets are null or empty. Now we want to store the key-value pair K = 6 and V = A in our hash table (Figure 7.13). The key is 6 and the value is A. So, we pass the key to our hash function, which returns the remainder of dividing 6 by 5, which is 1. So, we should store this value at index 1. But we're not going to store this value directly in the cell. Instead, we're going to wrap it in a linked list node and have the cell at index 1 point to this node.

 

  Figure 7.13: How to store the key-value pair K = 6 and V = A in a hash table using node A of a linked list.

  Now, we have another key-value pair K = 8 and V = B. Where should we store this pair? Well, if we give 8 to our hash function, we'll get 3 (remainder of 8/5). So, we should store the value at index 3, but again, we're not going to store it directly in the cell. Instead, we're going to store it in a linked list. Let's look at another example.

  In yet another key-value pair, K = 11 and V = C, our key is 11. If we give it to our hash function, we'll get 1 (the remainder of 11/5). So, we should go to the cell at index 1 and store this value at the end of the linked list there (Figure 7.14).

 

  Figure 7.14: How to store three key-value pairs in a hash table using linked lists.
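  To make this concrete, here is a tiny sketch of the three insertions above. The helper method and the string form of the entries are my own simplifications:

  import java.util.LinkedList;

  public class ChainingDemo {
      public static void main(String[] args) {
          // Five buckets; each bucket is a lazily created linked list.
          LinkedList<String>[] buckets = new LinkedList[5];

          put(buckets, 6, "A");   // 6 % 5 = 1  -> bucket 1
          put(buckets, 8, "B");   // 8 % 5 = 3  -> bucket 3
          put(buckets, 11, "C");  // 11 % 5 = 1 -> appended to bucket 1

          System.out.println(buckets[1]);  // prints [6=A, 11=C]
      }

      static void put(LinkedList<String>[] buckets, int key, String value) {
          int index = key % buckets.length;        // the hash function
          if (buckets[index] == null)
              buckets[index] = new LinkedList<>(); // create the bucket on demand
          buckets[index].add(key + "=" + value);
      }
  }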

  This is the idea behind chaining. We're basically chaining a bunch of items together; that's where the name comes from. With this approach, collisions are no longer a problem, and these linked lists can grow or shrink automatically. Next, we're going to look at another strategy for handling collisions.

  7.12. Linear Probing – 32:06 min.

  There is another strategy for handling collisions. We call it open addressing. With this approach, we don't store values in linked lists; we store them directly in the array cells or slots. Let's see this in action. Just like in the previous section, let's say we have a hash table with 5 slots, and we want to store the key-value pair K = 6 and V = A. Since the key is 6, our hash function is going to return 1 (the remainder of 6/5). So, we should store this key-value pair at the slot with index 1 (Figure 7.15).

 

  Figure 7.15: Illustration of linear probing

  Next, we're going to store another key-value pair, K = 8 and V = B. Similarly, this pair gets stored at the slot with index 3 (the remainder of 8/5). Easy peasy. Now, what about this other key-value pair: K = 11 and V = C? If we pass this key to our hash function, we'll get 1 (the remainder of 11/5), but there is already an item stored in that slot. So, we have a collision. To solve this, we have to look for another empty slot.

  This is called probing, which means searching. We have to search for another location, and this is the reason why this approach is called open addressing (because the address of a key-value pair is not determined solely by the hash function). We have to search for another empty slot.

  7.12.1. Linear Probing

  Now, we have three searching or probing algorithms. The first one, which we're going to talk about in this section, is linear probing. With this algorithm, we start from the current slot. If it's full, we go to the next slot. If that's full too, we keep going forward until we find an empty slot. Now, what if we can't find any empty slots? That means our table is full. This is one of the drawbacks of the open addressing strategy. With the chaining strategy, we don't have this problem because our linked lists can grow automatically.

  Back to business: in this example, we should store this key-value pair at index 2 in Figure 7.15. Here's the formula for linear probing:

  Linear probing: hash(key) + i

  We start with a hash value and then increment it by 1 at each step. So, i here is like a loop variable that starts at 0 and gets incremented until we find an empty slot. Okay? Now, because we're incrementing i in every step, the resulting index may end up outside the boundary of our array. So, we should apply the modulus operator to reduce the result to a range that fits in the array. Now, there is a problem with linear probing: the three items stored next to each other (Figure 7.16) form a cluster.

 

  Figure 7.16: Illustration of cluster in linear probing

  Next time we want to add a new key-value pair whose key falls in this range, our probing is going to take longer. We have to pass all the items in the cluster and add the new item at the end of it. As a result, our cluster gets bigger, and this makes future probing even slower (Figure 7.17).

 

  Figure 7.17: A new key-value pair added at the end of the cluster
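  Here is a minimal sketch of linear probing. To keep the idea clear, it stores bare keys rather than key-value pairs, and all names are illustrative:

  public class LinearProbing {
      private Integer[] slots = new Integer[5];

      private int hash(int key) {
          return key % slots.length;
      }

      // Probes forward from hash(key) until an empty slot is found;
      // returns -1 if the table is full.
      public int findSlot(int key) {
          for (int i = 0; i < slots.length; i++) {
              int index = (hash(key) + i) % slots.length;  // wrap around the array
              if (slots[index] == null)
                  return index;
          }
          return -1;
      }

      public void insert(int key) {
          int index = findSlot(key);
          if (index == -1)
              throw new IllegalStateException("Hash table is full");
          slots[index] = key;
      }

      public static void main(String[] args) {
          var table = new LinearProbing();
          table.insert(6);                         // hash = 1, stored at index 1
          table.insert(8);                         // hash = 3, stored at index 3
          table.insert(11);                        // hash = 1, collision, lands at index 2
          System.out.println(table.findSlot(16));  // hash = 1, probes past the cluster: 4
      }
  }

  Notice how the key 16 has to walk past the entire cluster at indexes 1 to 3 before finding a home at index 4; that is exactly the slowdown described above.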

  In the next section, we'll look at another probing algorithm that attempts to solve this problem.

  7.13. Quadratic Probing – 34:48 min.

  As you saw in the last section, clusters can form when we use the linear probing algorithm, and these clusters reduce performance. To solve this problem, we can use quadratic probing. I know it's a fancy word, but quadratic comes from the Latin word for square. The formula for linear probing is shown again on the left in Figure 7.18. If we change i to i², we have a quadratic expression.

 

  Figure 7.18: Comparison of linear probing and quadratic probing

  How is this going to solve our problem? How is this going to prevent clusters from forming? Well, with linear probing, we increment i by 1 in each step, so new key-value pairs get stored next to each other and form a cluster. But if we raise i to the power of 2, our key-value pairs are going to be spread out more widely.

  Now, just to clarify: the formulas in Figure 7.18 are simplified and don't show the modulus operator. But in both algorithms, the computed index can end up beyond the boundary of our array, so we should always reduce the result to the size of the array:

  (hash(key) + i²) % table_size
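  As a tiny sketch (class and method names are mine), the probe sequence for a key can be computed like this:

  public class QuadraticProbing {
      static int hash(int key, int tableSize) {
          return key % tableSize;
      }

      // The slot to try on the i-th probe for a given key.
      static int probe(int key, int i, int tableSize) {
          return (hash(key, tableSize) + i * i) % tableSize;
      }

      public static void main(String[] args) {
          // With table size 5 and key 11 (hash = 1), the probe sequence repeats:
          for (int i = 0; i < 5; i++)
              System.out.print(probe(11, i, 5) + " ");  // prints: 1 2 0 0 2
      }
  }

  Notice that indexes 0 and 2 are each tried twice, while 3 and 4 are never reached. This is the problem discussed next.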

  So, quadratic probing solves the clustering problem, but it has another problem! Because we're making big jumps to find an empty slot, we may wrap around to the beginning of the array and end up repeating the same steps. So, we may end up in an infinite loop! With linear probing, we don't have this problem, because we move one slot at a time and will eventually visit every slot. Next, we'll look at another algorithm that solves this problem.

  7.14. Double Hashing – 36:17 min.

  You learned in the last section that with quadratic probing, you may end up in an infinite loop because the algorithm generates the same steps: 1, 4, 9, then we might wrap around to the beginning of the array and repeat 1, 4, 9, and so on. This is where double hashing comes to the rescue.

  With this algorithm, instead of i or i², we use a separate, independent hash function to calculate the number of steps. Here's a popular second hash function:

  hash2(key) = prime - (key % prime)

  The prime in this function should be a prime number smaller than the size of our table. Now, where does this formula come from? Honestly, I don't know! But experts have figured out that it works well. Here's how we calculate the index using the double hashing algorithm:

  (hash1(key) + i * hash2(key)) % table_size

  Don't panic! It's actually easier than you think. Just like before, we start with an initial hash value, hash1(key). Then we calculate the steps, i * hash2(key). Previously, we used i in linear probing and i² in quadratic probing. Now, we're using i times the second hash value (i * hash2(key)). That's the only difference. Finally, we apply the modulus operator to reduce the result. Please watch the video for this section to see double hashing in action.

  As you already know, double hashing is a collision resolution technique used in hash tables, which are data structures that store key-value pairs and provide efficient retrieval of values based on their keys. When two different keys hash to the same index in the hash table (a collision occurs), double hashing provides a way to find an alternative index for the second key.

  Here's a brief explanation of how double hashing works:

  1. Hash functions: Start with two hash functions, hash1(key) and hash2(key). These functions should be designed to produce different hash values for the same key.

  2. Initial index: Calculate an initial index by applying the first hash function to the key: index = hash1(key) % tableSize.

  3. Probe sequence: If a collision occurs (i.e., the calculated index is already occupied by another key), a probe sequence is used to find an alternative index. The probe sequence is determined by the second hash function.

  4. Double hashing formula: Use the following formula to calculate the next index in the probe sequence:

  nextIndex = (index + i * hash2(key)) % tableSize

  i starts at 0 and increments by 1 with each iteration.

  tableSize is the size of the hash table.

  This algorithm calculates a series of indices to check in the hash table. It starts at the initial index and then iterates using the second hash function until an empty slot is found or the entire table has been searched.

  5. Insertion and retrieval: When inserting a key-value pair into the hash table, you calculate the initial index and then follow the probe sequence to find the next available slot. When retrieving a value based on a key, you calculate the initial index, and if the key is not found at that index, you follow the same probe sequence to search for it.

  6. Load factor: The efficiency of double hashing depends on the choice of hash functions and the size of the hash table. A key consideration is the load factor, which is the ratio of the number of elements in the hash table to its size. A well-designed double hashing scheme aims to keep the load factor low to minimize collisions and ensure efficient retrieval.

  In this section, you’ve learned that double hashing is a technique for resolving collisions in hash tables by using two hash functions and a probe sequence to find an alternative index when a collision occurs. It offers a way to efficiently handle collisions while maintaining good performance in terms of insertion and retrieval operations, especially when the load factor is kept under control.
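  Here is a minimal sketch of the probing part of double hashing, assuming a table of size 7 and the popular second hash function above (all names are illustrative):

  public class DoubleHashing {
      private Integer[] slots = new Integer[7];
      private static final int PRIME = 5;  // a prime smaller than the table size

      private int hash1(int key) {
          return key % slots.length;
      }

      private int hash2(int key) {
          // Never returns 0, so each probe always moves forward.
          return PRIME - (key % PRIME);
      }

      // Returns the index where the key should be stored, or -1 if none is found.
      public int findSlot(int key) {
          for (int i = 0; i < slots.length; i++) {
              int index = (hash1(key) + i * hash2(key)) % slots.length;
              if (slots[index] == null)
                  return index;
          }
          return -1;
      }

      public void insert(int key) {
          int index = findSlot(key);
          if (index == -1)
              throw new IllegalStateException("Could not find an empty slot");
          slots[index] = key;
      }

      public static void main(String[] args) {
          var table = new DoubleHashing();
          table.insert(6);                         // hash1 = 6, stored at index 6
          System.out.println(table.findSlot(13));  // hash1 = 6 (collision), hash2 = 2, prints 1
      }
  }

  Because the table size (7) is prime and the step from hash2 is never 0, every probe sequence eventually visits every slot, which avoids the infinite-loop problem of quadratic probing.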

  7.14.1. Review of All the Probing Algorithms Theory

  Figure 7.19 shows a review of the probing algorithms we’ve discussed so far.

 

  Figure 7.19: Review of the three probing algorithms

  As you can see, in all these algorithms we start with an initial hash value and then add a number of steps to find an empty slot. In linear probing, we use i. In quadratic probing, we use i², and in double hashing, we use i times the second hash.

  7.15. Exercise: Building a Hash Table – 39:37 min.

  Alright, enough theory! Now it's time for an exercise. This exercise is actually an interview question. I want you to implement a hash table from scratch. So, create a hash table class called HashTable. In this class, we should have a few methods, like put(k, v), which takes a key-value pair. We should also have get(k), which takes a key and returns a value, and remove(k), which takes a key and removes the corresponding value from the hash table.

  In this implementation, I want our keys to be integers and our values to be strings. For handling collisions, I want you to use the chaining strategy. So, instead of storing these key-value pairs inside array cells, we want to store them in linked lists.

  Now, let me give you a hint before you get started. What do you think is the type of the array that we should use to store these items? Is it going to be an integer array or a string array? Well, actually neither, because we want to store both the key and the value together. This is very important: if we don't store the key, we cannot handle duplicates later.

  So, if we call the put(k, v) method and pass a key-value pair where the key already exists, we won't be able to overwrite the value of that key. For this reason, we want to store both the key and the value together in our hash table. So, internally, you should create a private class. You can call it KeyValuePair, Entry, or whatever you want. This class wraps a key-value pair, so it has two fields: key and value.

Now, this doesn't mean that the array is going to be an Entry[] array. No, because then we'd have a fixed-size array where every element is a single entry object. That is not what we want. We want every element in this array to hold a linked list. So, here we should have an array of linked lists, so that every element in it is a linked list: [LL, LL, LL, ...].

  Now, as we call the put(k, v) method, we insert items into these linked lists. Initially, none of these linked lists are initialized; they're all null. We initialize them on demand as we add new items.

  Now, as you know, the LinkedList class in Java is a generic class, so we should specify the type of objects we want to store in the linked list. What are those objects? They're Entry objects: LinkedList<Entry>. So, LinkedList<Entry>[] is the kind of array we need for storing items in this hash table.

  Alright, that's pretty much it. Spend 20 to 30 minutes on this exercise. When you're done, go to the next section to see my solution.

  7.16. Solution: put( ) – 42:14 min.

  A good implementation of the put( ) method was already shown in the video. Figures 7.20A to 7.20D show another implementation.

 

  Figure 7.20A: An implementation of the put( ) method – Part A.

 

  Figure 7.20B: An implementation of the put( ) method – Part B.

 

  Figure 7.20C: An implementation of the put( ) method – Part C.

 

  Figure 7.20D: An implementation of the put( ) method – Part D.

  When you run this code you get the following output:

  Value for key 2: Two

  Value for key 2 after removal: null

  Size of the hash table: 2

  The hash table uses an array of linked lists to manage collisions, and each linked list stores key-value pairs using a private class called Entry. This HashTable class meets our requirements for storing and managing key-value pairs, using chaining to handle collisions.

  7.17. Solution: get( ) – 48:21 min.

  In this section, we're going to implement the get method. Please watch the video to see the implementation.

  7.18. Solution: remove( ) – 52:51 min.

  Implementing the remove method is very similar to the get method. So, we're going to end up with some code duplication, but let's not worry about it yet. We'll come back and refactor our code later. Watch the video right now to see how the remove( ) method is implemented.
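  If you can't access the videos, here is a compact sketch of the whole class, consistent with the descriptions above. The size() method and the demo in main are my additions to reproduce the sample output; the code in the videos and in Figures 7.20A to 7.20D may differ.

  import java.util.LinkedList;

  public class HashTable {
      // Private class that wraps a key-value pair.
      private class Entry {
          private int key;
          private String value;

          public Entry(int key, String value) {
              this.key = key;
              this.value = value;
          }
      }

      private LinkedList<Entry>[] entries = new LinkedList[5];
      private int count;

      public void put(int key, String value) {
          var entry = getEntry(key);
          if (entry != null) {
              entry.value = value;  // the key already exists: overwrite its value
              return;
          }
          getOrCreateBucket(key).add(new Entry(key, value));
          count++;
      }

      public String get(int key) {
          var entry = getEntry(key);
          return (entry == null) ? null : entry.value;
      }

      public void remove(int key) {
          var entry = getEntry(key);
          if (entry == null)
              return;
          entries[hash(key)].remove(entry);
          count--;
      }

      public int size() {
          return count;
      }

      private int hash(int key) {
          return key % entries.length;  // assumes non-negative keys
      }

      private LinkedList<Entry> getOrCreateBucket(int key) {
          var index = hash(key);
          if (entries[index] == null)
              entries[index] = new LinkedList<>();  // buckets are created on demand
          return entries[index];
      }

      private Entry getEntry(int key) {
          var bucket = entries[hash(key)];
          if (bucket != null)
              for (var entry : bucket)
                  if (entry.key == key)
                      return entry;
          return null;
      }

      public static void main(String[] args) {
          var table = new HashTable();
          table.put(1, "One");
          table.put(2, "Two");
          table.put(3, "Three");
          System.out.println("Value for key 2: " + table.get(2));
          table.remove(2);
          System.out.println("Value for key 2 after removal: " + table.get(2));
          System.out.println("Size of the hash table: " + table.size());
      }
  }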

  7.19. Solution: Refactoring & Automated Testing– 55:22 min.

  You've seen several examples of me refactoring code in this course, but one thing I forgot to tell you earlier is that refactoring is a creative process. If we give the same code to two different developers, they will refactor it differently. Even the same person might refactor the same code in different ways at different times. So, don't take what I've shown you so far as the ultimate or best way to refactor code. These are just some ideas.

  Now in the video of this section, we're going to work on a few different ideas. These are the ideas that I came up with just before recording this video. So, what I'm going to show you might look very neat and clean because I already practiced this before recording this video. That's one thing.

  The other thing is that once we come up with a working solution, we should have a bunch of automated tests before we refactor our code. Now, automated testing is a completely different topic and is beyond the scope of this course, but the idea is to write code that tests our code. If I refactor the code I've written so far without having automated tests, I might break the solution along the way.

  That is why we need automated tests. Every time we make a change, we run our tests, and they tell us whether we have broken anything. With all this introduction, let's now jump into the video and see how we can refactor the code.

  Now, I've made a lot of big changes to the code. I don't know if I've broken anything or not; hopefully I haven't. But this is where automated testing comes to the rescue. If I had a bunch of automated tests around this hash table class, I could run hundreds of tests in less than a second, and they would tell me if I've broken any functionality. I wouldn't have to come back to the main class and keep testing it manually with different values. This is the benefit of automated testing.

  7.20. Wrap up – 1:06:26 min.

  Let's quickly recap all the key points you learned about hash tables. We use hash tables to store key-value pairs and look them up in constant time. In fact, all hash table operations, such as insert, remove, and lookup, run in constant time, or O(1).

  Hash tables use a hash function to map a key to an index. When storing items with different keys, it is possible that the hash function returns the same index. This is what we call a collision.

  There are different strategies for handling collisions. We can use a linked list at each array cell; this is called chaining. Or we can store key-value pairs directly in the array and, in case of a collision, probe or search for a new empty slot; this is called open addressing.

  Now we have 3 probing algorithms:

  1. Linear probing: we look for adjacent slots. This can cause clusters to form and slow down future insertions and lookups.

  2. Quadratic probing: we make big jumps to prevent clusters from forming, but we may end up in an infinite loop.

  3. Double hashing: we use a second hash function to calculate the steps.

 

This brings us to the end of the section. I hope you learned a lot.

  7.21. Coming up Next

  Congratulations on completing this part of the series. I hope you will buy the next volume. In the next part, we're going to talk about non-linear data structures such as:

  Binary Trees, AVL Trees, Heaps, Tries, and Graphs

  If you enjoyed this course, please support me by telling others about this series. I will really appreciate your support. I hope to see you again in the next part of this series. Thank you very much for taking this course!

  7.22. How to Download Tutorial Videos & Other Resources

  Please use my email address below to request the link you can use to download, for free, all the videos, code, screenshots, practice exercises and solutions, and other training resources I provided in this course. These resources are updated regularly.

  Also, if you need further assistance, or if you want to share your code with me, use my email address below to contact me. I’ll get back to you very quickly (12 hours maximum). I promise.

  Cheers,
  Bolakale Aremu
  Ojula Technology Innovations
  [email protected]
  www.ojulaweb.com

 

All Books in the Series

  Volume 1: Introduction to Algorithms & Data Structures 1

  Volume 2: Introduction to Algorithms & Data Structures 2

  Volume 3: Introduction to Algorithms & Data Structures 3

  Volume 4: Introduction to Algorithms & Data Structures 4

  Volume 5: Introduction to Algorithms & Data Structures 5

  Volume 6: Introduction to Algorithms & Data Structures 6