Tag Archives: search

The Depth-First-Search Algorithm

I remember when I was younger I used to play the game of hide-and-seek a lot. This is a game where a group of people (at least two) separate into a group of hiders and a group of seekers. The most common version of this that I've seen is having one person as the seeker and everyone else as hiders. Initially, the seeker is given a number to count to and closes their eyes while counting. The hiders then search for places to hide from the seeker. Once the seeker is finished counting, their job is to find where everyone is hiding or to admit that they cannot find all the hiders. Any hiders not found are said to have won, and hiders that are found are said to have lost.

I played this game a number of times in my childhood, but I remember playing it with a friend named Dennis in particular. Dennis had a certain way he played as seeker. While many of us would simply go to places we deemed as “likely” hiding spots in a somewhat random order, Dennis would always begin by looking in one area of the room, making sure that he had searched through every area connected to that area before going to a new area. He continued this process until he either found everybody or concluded that he had searched every spot he could think of and gave up.

It wasn't until years later that I was able to note the similarity between Dennis's way of playing hide-and-seek and the Depth-First-Search algorithm. The Depth-First-Search algorithm is a way of exploring all the nodes in a graph. Similar to hide-and-seek, one could choose to do this in a number of different ways. Depth-First-Search does this by beginning at some node, looking first at one of the neighbors of that node, then looking at one of the neighbors of this new node. If the new node does not have any new neighbors, then the algorithm goes back to the previous node, looks at the next neighbor of that node, and continues from there. Initially all nodes are unmarked, and the algorithm proceeds by marking nodes as being in one of three states: nodes we have already examined are marked "visited"; nodes we have discovered but not yet examined are marked "to-visit"; and nodes that have not been discovered at all are "unvisited".
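To make the marking scheme concrete, here is a minimal sketch of the procedure in Python. It is an illustration rather than the script linked below, and it assumes the graph is stored as a dictionary mapping each node to a list of its neighbors (the bedroom graph used here is reconstructed from the tables in this post):

def depth_first_search(graph, start):
    """Visit every node reachable from start, going deep before wide."""
    marked = {start}    # nodes that are "visited" or "to-visit"
    to_visit = [start]  # the "to-visit" list, used as a stack (last in, first out)
    visited = []        # the order in which nodes get marked "visited"

    while to_visit:
        node = to_visit.pop()            # take the most recently discovered node
        visited.append(node)
        for neighbor in graph[node]:
            if neighbor not in marked:   # ignore nodes already marked
                marked.add(neighbor)
                to_visit.append(neighbor)
    return visited

# Node 0 is the door; neighbor lists are ordered so the traversal
# matches the table later in this post.
bedroom = {0: [1, 5, 6], 1: [0, 2, 4], 2: [1, 4, 6], 3: [6],
           4: [1, 2], 5: [0], 6: [0, 2, 3, 7], 7: [6]}
print(depth_first_search(bedroom, 0))    # [0, 6, 7, 3, 2, 4, 5, 1]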

Consider a bedroom with the following possible hiding locations: (1) Under Bed, (2) Behind Cabinet, (3) In Closet, (4) Under Clothes, (5) Behind Curtains, (6) Behind Bookshelf, and (7) Under Desk. We can visualize how the bedroom is arranged as a graph and then use the Depth-First-Search algorithm to show how Dennis would search the room. Consider the following bedroom arrangement, where we have replaced the name of each item by the number corresponding to that item. Node (0) corresponds to the door, which is where Dennis stands and counts while others hide.

Bedroom Items as a Graph

Now consider how a Depth-First-Search would be run on this graph.

Bedroom Items as a Graph Colored by DFS

The colors correspond to the order in which nodes are visited in Depth-First-Search.

The way we read this is that initially Dennis would start at node 0, which is colored in blue.
While Dennis is at node 0, he notices that nodes 1, 5, and 6 (under bed, behind curtains, and behind bookshelf) are nearby and have not been checked yet, so he places them on the "to-visit" list.
Next, Dennis will begin to visit each node on the "to-visit" list, and when a node is visited, he labels it as visited. At each location, he also takes note of the other locations he can reach from there. Below is the order in which Dennis visits the nodes and how he discovers new locations to visit.

Order   Node Visited   Stack (after removal)   Nodes Added   Distance from Node 0
1       0              (empty)                 6, 5, 1       0
2       6              5, 1                    7, 3, 2       1
3       7              3, 2, 5, 1              -             2
4       3              2, 5, 1                 -             2
5       2              5, 1                    4             2
6       4              5, 1                    -             3
7       5              1                       -             1
8       1              (empty)                 -             1

(Because Depth-First-Search always explores the most recently discovered node first, the "to-visit" list behaves as a stack. The distances shown are depths in the resulting search tree, not necessarily shortest distances.)

Here is a link to my Examples page that implements the Depth-First-Search Algorithm on Arbitrary Graphs.

The Breadth-First-Search Algorithm

I remember when I was younger I used to play the game of hide-and-seek a lot. This is a game where a group of people (at least two) separate into a group of hiders and a group of seekers. The most common version of this that I've seen is having one person as the seeker and everyone else as hiders. Initially, the seeker is given a number to count to and closes their eyes while counting. The hiders then search for places to hide from the seeker. Once the seeker is finished counting, their job is to find where everyone is hiding or to admit that they cannot find all the hiders. Any hiders not found are said to have won, and hiders that are found are said to have lost.

I played this game a number of times in my childhood, but I remember playing it with a friend named Brenda in particular. Brenda had a certain way she played as seeker. While many of us would simply go to places we deemed as “likely” hiding spots in a somewhat random order, Brenda would always take a survey of the room, and no matter where she began searching, she would always make note of the locations close to her starting point and make sure she was able to give them all a look before she looked at locations that were close to the points she deemed close to the starting point. She continued this process until she either found everybody or concluded that she had searched every spot she could think of and gave up.

It wasn't until years later that I was able to note the similarity between Brenda's way of playing hide-and-seek and the Breadth-First-Search algorithm. The Breadth-First-Search algorithm is a way of exploring all the nodes in a graph. Similar to hide-and-seek, one could choose to do this in a number of different ways. Breadth-First-Search does this by beginning at some node, looking first at each of the neighbors of the starting node, then looking at each of the neighbors of the neighbors of the starting node, continuing this process until there are no remaining nodes to visit. Initially all nodes are unmarked, and the algorithm proceeds by marking nodes as being in one of three states: nodes we have already examined are marked "visited"; nodes we have discovered but not yet examined are marked "to-visit"; and nodes that have not been discovered at all are "unvisited".
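Again as an illustration (not the Examples page script), here is a minimal Python sketch, assuming the same dictionary-of-neighbors representation and the bedroom graph reconstructed from the table below:

from collections import deque

def breadth_first_search(graph, start):
    """Visit every node reachable from start, nearest nodes first."""
    marked = {start}           # nodes that are "visited" or "to-visit"
    to_visit = deque([start])  # the "to-visit" list, used as a queue (first in, first out)
    distance = {start: 0}      # number of steps from the starting node
    visited = []

    while to_visit:
        node = to_visit.popleft()        # take the earliest discovered node
        visited.append(node)
        for neighbor in graph[node]:
            if neighbor not in marked:
                marked.add(neighbor)
                distance[neighbor] = distance[node] + 1
                to_visit.append(neighbor)
    return visited, distance

bedroom = {0: [1, 5, 6], 1: [0, 2, 4], 2: [1, 4, 6], 3: [6],
           4: [1, 2], 5: [0], 6: [0, 2, 3, 7], 7: [6]}
order, distance = breadth_first_search(bedroom, 0)
print(order)     # [0, 1, 5, 6, 2, 4, 3, 7], the order in the table below
print(distance)  # {0: 0, 1: 1, 5: 1, 6: 1, 2: 2, 4: 2, 3: 2, 7: 2}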

Consider a bedroom with the following possible hiding locations: (1) Under Bed, (2) Behind Cabinet, (3) In Closet, (4) Under Clothes, (5) Behind Curtains, (6) Behind Bookshelf, and (7) Under Desk. We can visualize how the bedroom is arranged as a graph and then use the Breadth-First-Search algorithm to show how Brenda would search the room. Consider the following bedroom arrangement, where we have replaced the name of each item by the number corresponding to that item. Node (0) corresponds to the door, which is where Brenda stands and counts while others hide.

Bedroom Items as a Graph

Now consider how a Breadth First Search would be run on this graph.

Bedroom Items as a Graph Colored by BFS

The colors correspond to the order in which nodes are visited in Breadth-First-Search.

The way we read this is that initially Brenda would start at node 0, which is colored in blue.
While Brenda is at node 0, she notices that nodes 1, 5, and 6 (under bed, behind curtains, and behind bookshelf) are nearby and have not been checked yet, so she places them on the "to-visit" list.
Next, Brenda will begin to visit each node on the "to-visit" list, and when a node is visited, she labels it as visited. At each location, she also takes note of the other locations she can reach from there. Below is the order in which Brenda visits the nodes and how she discovers new locations to visit.

Order   Node Visited   Queue (after removal)   Nodes Added   Distance from Node 0
1       0              (empty)                 1, 5, 6       0
2       1              5, 6                    2, 4          1
3       5              6, 2, 4                 -             1
4       6              2, 4                    3, 7          1
5       2              4, 3, 7                 -             2
6       4              3, 7                    -             2
7       3              7                       -             2
8       7              (empty)                 -             2

Here is a link to my Examples page that implements the Breadth-First-Search Algorithm on Arbitrary Graphs.

The A* Algorithm

As a child I remember traveling on road trips, sitting in the back of a car trying to do my best to keep myself busy for what would occupy the next six to ten hours of my life. One of the things I grew to like were the simple maze books that were sold on the magazine racks at some of the gas stations where we'd stop for food. There are two basic strategies I employed for solving these mazes: For simpler mazes, I could generally just take a look at the overall maze structure, decide upon a path through the maze, then draw that path without any mistakes. For more complex mazes, though, I would generally begin with the route that looked the most promising. If that route reached a point where I could see that it would be impossible to finish, then I'd go back to where the decision was made, exclude the route I had just tried, and select the "most promising" remaining route. I'd continue this process until I had completed the maze, or until it became simple enough for me to solve using only my memory.

The A* Algorithm works in a similar manner to the second approach I just described. We begin at a starting point, and consider where to move next from that starting point. The set of possible options for this next move is determined by the neighbor function for a given cell. For each neighbor the algorithm estimates the length of the route through that cell by calculating the sum of two values, f(n) = g(n) + h(n), where

g(n) is the (known) cost to travel from the starting point to the cell n.
h(n) is the (approximate) cost to travel from the cell n to the final cell.

The sum g(n) + h(n) allows us to approximate the total cost of a route through the cell n.

The A* algorithm begins at the starting position of the maze. There are two sets we will be considering throughout the process of determining the optimal route, called the closedSet and the openSet. The elements of closedSet are the nodes whose total distance from the starting position has been calculated, whereas the elements of openSet represent nodes whose total distance is still under consideration. There is also a map called prev which is used to reconstruct the path. Below is how the algorithm operates:

A* Algorithm Pseudocode
closedSet is the empty set
openSet = {start}
prev is the empty map

While there are still elements in openSet,
     Find the element c* in openSet with the lowest value f(c).
     If c* is the target position
          Reconstruct the path.
     Else If c* is not the target position,
          Remove c* from openSet
          Add c* to closedSet
          For each neighbor n of c* that is not in closedSet,
               Calculate a temporary g value, temp_g(n) = g(c*) + dist(c*, n).
                If n is not in openSet, or if n is in openSet and temp_g(n) < g(n),
                     Set prev(n) = c*
                     Set g(n) = temp_g(n)
                     Set f(n) = g(n) + h(n)
                     If n is not in openSet,
                          add n to openSet.
     End If
End While
End Algorithm

An important question becomes what makes a good heuristic function to approximate the distance to the goal. This can lead to an in-depth discussion based on the word "good", but the necessary condition for any heuristic is that it NEVER over-estimates the cost of the path from the cell to the goal (such a heuristic is called admissible). Some examples of possible heuristics for mazes are the Euclidean distance, the square root of the sum of the squares of the horizontal and vertical differences, i.e. dE(x, y) = √(Σi (xi − yi)²), and the taxicab distance, the sum of the absolute differences in the horizontal and vertical dimensions, i.e. dT(x, y) = Σi |xi − yi|. Both of these are feasible heuristics for mazes. Other heuristics, like h(n) = 0 for all n, are possible, but h(n) = 0 would make the algorithm treat all cells equally, ignoring the heuristic part of the A* algorithm and turning it into Dijkstra's algorithm.

I’ve written a script that generates random mazes and uses the A* algorithm to find the optimal path through this maze. For this script, I used the taxicab distance heuristic.
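For readers who want to see the pseudocode in runnable form, below is a minimal Python sketch of A* on a grid maze with the taxicab heuristic. It is a simplified illustration, not the script mentioned above; cells are (row, column) pairs, 0 marks an open cell, and every step costs 1:

import heapq

def taxicab(a, b):
    """Taxicab (Manhattan) distance between two grid cells."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def a_star(grid, start, goal):
    """Return a shortest path from start to goal as a list of cells,
    or None if the goal cannot be reached."""
    rows, cols = len(grid), len(grid[0])
    open_set = [(taxicab(start, goal), start)]  # heap of (f value, cell)
    g = {start: 0}     # known cost from the start to each cell
    prev = {}          # prev[n] = the cell we reached n from
    closed = set()

    while open_set:
        _, c = heapq.heappop(open_set)   # cell with the lowest f value
        if c == goal:
            path = [c]                   # reconstruct the path using prev
            while c in prev:
                c = prev[c]
                path.append(c)
            return path[::-1]
        if c in closed:
            continue                     # stale heap entry, skip it
        closed.add(c)
        r, k = c
        for n in ((r - 1, k), (r + 1, k), (r, k - 1), (r, k + 1)):
            if (0 <= n[0] < rows and 0 <= n[1] < cols
                    and grid[n[0]][n[1]] == 0 and n not in closed):
                temp_g = g[c] + 1
                if temp_g < g.get(n, float("inf")):
                    prev[n] = c
                    g[n] = temp_g
                    heapq.heappush(open_set, (temp_g + taxicab(n, goal), n))
    return None

maze = [[0, 0, 0, 1],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
print(a_star(maze, (0, 0), (3, 3)))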

Check it out and let me know what you think.

Permutation Problems

I love to play with puzzles. When I was in grade school I would spend hours at a time figuring out ways to solve things like Tetris, Minesweeper, Solitaire, and FreeCell. Later I was introduced to puzzles involving numbers, like Sudoku and Nonograms.

These puzzles are often interesting in part because there is generally a very large number of ways that things can be arranged, but only a few of these arrangements are correct. Generally a person solving a puzzle will figure out certain things that must be true or cannot be true, which helps in solving the puzzle and reducing the number of possible cases. Initially, though, we are often left with a situation where we have a new puzzle and our only method is to keep trying every possible solution until we start to notice a pattern (or reach a solution).

For example, consider a Sudoku puzzle. We are given a partially filled in grid and our job is to fill in the remaining cells with the rules that every row, column and marked subgrid must have the numbers 1 – 9 exactly once. One initial attempt at solving such a puzzle could be to attempt to permute the string 1, 2, 3, 4, 5, 6, 7, 8, 9 until we find a solution that fits the first row, then do the same with the second row, and so on and so forth.

One immediate question is how many ways are there to permute the numbers 1, …, 9? We can answer this by realizing that each permutation is a new string. So for each string that we construct, we have 9 choices for the first element in the string. Once that element has been chosen, it cannot appear anywhere else, so there are only 8 possible choices for what can go in the second position. Continuing this process, we see that the number of possible permutations we can construct from the string 1, …, 9 is

9 * 8 * 7 * 6 * 5 * 4 * 3 * 2 * 1 = 362880

This is a large number of possible strings to generate just to get one row of a Sudoku, so hopefully you’ll be able to notice the pattern before going through this whole set (because once you’ve generated the first row you still have to do the other 8 rows).

Nonetheless, because there is often great value in knowing how to step through all possible permutations, I have written three functions that help with this process: Next_Permutation, Previous_Permutation, and Random_Permutation.

Before I give these algorithms, I want to highlight two orderings on strings. A string (a1, a2, …, an) is said to be in lexicographic order (or alphabetical order) if ai ≤ ai+1 for each i in 1, …, n−1. Likewise, a string is said to be in reverse lexicographic order if ai ≥ ai+1 for each i in 1, …, n−1.

Next Permutation

If the given string is not in reverse lexicographic order, then there exist two indices j1 and j2 such that j1 < j2 and aj1 < aj2.
1. The Next_Permutation algorithm first searches for the largest index j1 such that aj1 < aj1+1. Since we said the string is not in reverse lexicographic order, this j1 must exist.
2. Once this j1 is found, we search for the smallest element aj2, with j2 > j1, such that aj2 > aj1. Since this property holds for j1 + 1, we know that such a j2 must exist.
3. We swap the elements aj1 and aj2.
4. The elements after position j1 are then placed in lexicographic order (i.e. in order such that ai ≤ ai+1).

If the given string is in reverse lexicographic order, then we simply reverse the string.
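Here is a minimal Python sketch of these four steps, operating on a list in place (an illustration; the script on the Examples page may differ):

def next_permutation(a):
    """Advance the list a to the next permutation in lexicographic order."""
    # Step 1: largest index j1 with a[j1] < a[j1 + 1].
    j1 = len(a) - 2
    while j1 >= 0 and a[j1] >= a[j1 + 1]:
        j1 -= 1
    if j1 < 0:
        a.reverse()   # already in reverse lexicographic order: wrap around
        return a
    # Step 2: the suffix after j1 is nonincreasing, so the rightmost
    # element greater than a[j1] is the smallest such element.
    j2 = len(a) - 1
    while a[j2] <= a[j1]:
        j2 -= 1
    # Step 3: swap the two elements.
    a[j1], a[j2] = a[j2], a[j1]
    # Step 4: put the elements after position j1 in increasing order.
    a[j1 + 1:] = reversed(a[j1 + 1:])
    return a

print(next_permutation([1, 2, 3]))  # [1, 3, 2]
print(next_permutation([3, 2, 1]))  # [1, 2, 3] (wraps around)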

Previous Permutation

If the given string is not in lexicographic order, then there exist two indices j1 and j2 such that j1 < j2 and aj1 > aj2.
1. The Previous_Permutation algorithm first searches for the largest index j1 such that aj1 > aj1+1. Since we said the string is not in lexicographic order, this j1 must exist.
2. Once this j1 is found, we search for the largest element aj2, with j2 > j1, such that aj2 < aj1. Since this property holds for j1 + 1, we know that such a j2 must exist.
3. We swap the elements aj1 and aj2.
4. The elements after position j1 are then placed in reverse lexicographic order (i.e. in order such that ai ≥ ai+1).

If the given string is in lexicographic order, then we simply reverse the string.

Random Permutation

This generates a random string permutation.
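One standard way to do this is the Fisher-Yates shuffle, which swaps each position with a uniformly chosen position that has not been finalized yet. A short sketch (again illustrative; the script on the Examples page may do this differently):

import random

def random_permutation(a):
    """Return a uniformly random permutation of the elements of a."""
    a = list(a)
    for i in range(len(a) - 1, 0, -1):
        j = random.randint(0, i)        # pick a position in a[0..i]
        a[i], a[j] = a[j], a[i]         # move that element into place
    return a

print(random_permutation([1, 2, 3, 4, 5]))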

This script can be seen here, set to work on permutations of colors.

Approximating the Set Cover Problem

Set Cover Problem Instance

I just finished my weekly task of shopping for groceries. This can be a somewhat daunting task because I generally have a list of things that I’ll need which cannot all be purchased at a single location. What often happens is that I find that many of the items on my list are ONLY offered at certain stores – generic brands of certain items for example. My goal then changes from minimizing the total amount of money spent to minimizing the number of stores that I must visit to purchase all of my items.

To formulate this as a mathematical problem, suppose that I have a grocery list of items I would like to buy, represented by item1, item2, …, itemn, where n represents the number of items on this list. Suppose also that there are stores Store1, Store2, …, Storem (each one distinct) that offer some combination of the items on my list. What I would like to do is minimize the number of stores I have to visit to purchase these items.

The problem I just described is famous because it is one that many people face on a regular basis. In its more general form it is so famous that it has a name, the Set Cover Problem (or the Minimum Set Cover Problem). In the general form of this problem, we replace the grocery list with a set of items called our universe. The lists of items offered at each store become a collection of subsets of the universe. As in the example above, we would like to select enough subsets from this collection that we obtain every element in our universe, and we would like to do this with as few sets as possible.

In my previous post, I described the 21 problems that Karp proved were NP-Complete. Set Cover was one of those problems, showing that this is a hard problem to solve. What I will do is introduce three ways to reach a near-optimal solution relatively quickly.

Greedy Method

One of the first approaches one may take to solve this problem is to repeatedly select the subset that contains the most new items. That's how the greedy approach to set cover operates, and it terminates when all elements belong to one of the selected sets. In the shopping example above, this would be accomplished by visiting the store that had the most items on my list and purchasing those items there. Once this is done, the purchased items can be crossed off my list and we can visit the store with the most items on my remaining list, stopping when the list is empty.
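In code, the greedy method is only a few lines. A sketch in Python, assuming the stores are given as a dictionary from store name to the set of items it offers (the names here are hypothetical, for illustration):

def greedy_set_cover(universe, subsets):
    """Repeatedly pick the subset covering the most uncovered items."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # The store with the most items still on the list wins this round.
        best = max(subsets, key=lambda s: len(subsets[s] & uncovered))
        if not subsets[best] & uncovered:
            raise ValueError("some item is not offered at any store")
        chosen.append(best)
        uncovered -= subsets[best]    # cross those items off the list
    return chosen

stores = {"Store1": {"milk", "eggs"},
          "Store2": {"eggs", "soap", "rice"},
          "Store3": {"milk", "rice"}}
print(greedy_set_cover({"milk", "eggs", "soap", "rice"}, stores))
# ['Store2', 'Store1']: after Store2, only milk remains on the list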

Linear Programming Relaxation

Instead of stating the set cover problem with words, there is a way of describing the situation with mathematical inequalities. For instance, suppose that the soap I like to purchase is only available at stores Store1, Store4 and Store9. Then I could introduce a variable xi for each store i and the requirement that I purchase this soap can be restated as :

x1 + x4 + x9 ≥ 1

Because we either shop at a given store or we don't, each variable xi is 0 or 1 (called a binary variable). We can introduce similar constraints for each element in our universe (or on our grocery list). These inequalities (called constraints) have the form:

for each element e in U:   ∑{i : e ∈ Si} xi ≥ 1

Our goal of minimizing the number of sets chosen (stores visited) can be stated by the objective function:
minimize ∑{1 ≤ i ≤ m} xi

So the mathematical formulation for this problem can be stated as

minimize ∑{1 ≤ i ≤ m} xi
subject to
∑{i : e ∈ Si} xi ≥ 1 for each element e in U
xi ∈ {0, 1} for each set Si.

Formulations of this type, where variables are restricted to a finite set (in this case the x variables being either 0 or 1) are called integer programs. Unfortunately, there is no easy way to solve these formulations either. However, there is a related problem which can be solved quickly.

Instead of restricting the x variables to the values of 0 or 1, we could allow them to take on any value within this range, i.e. 0 ≤ xi ≤ 1 for each set Si. Doing this converts the problem from an integer programming problem into a linear programming problem (called the LP-Relaxation), which can be solved quickly. The issue with this method, though, is that the solution obtained by an LP-Relaxation is not guaranteed to be an integer. In this case, how do we interpret the values xi?

Randomized Rounding Method

One approach to dealing with a non-integer solution to the LP-Relaxation is to treat the xi values as probabilities. We can say that xi is the probability that we select set i. This works because each value of xi is in the range of 0 to 1, which is necessary for a probability. We need to repeatedly select sets with their associated probabilities until all elements in our universe are covered. Selecting our sets based on this procedure is the randomized rounding approach.

Deterministic Rounding Method

A second approach to dealing with a non-integer solution to the LP-Relaxation is to base our solution on the most frequently occurring element. If we let f be this frequency (i.e. the largest number of sets that any one element occurs in), then we can define a solution by selecting set i whenever the LP-Relaxation gives the variable xi a value of at least 1/f.
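To show how the pieces fit together, here is a sketch of the LP-Relaxation plus deterministic rounding. It uses scipy's linprog to solve the relaxation (an assumption made for illustration; the demo linked below is implemented separately):

import numpy as np
from scipy.optimize import linprog

def rounded_set_cover(universe, subsets):
    """Solve the LP-Relaxation, then keep every set whose variable is
    at least 1/f. universe is a list of elements; subsets is a list
    of sets. Returns the indices of the selected sets."""
    m = len(subsets)
    # One constraint per element e: sum of x_i over sets containing e >= 1,
    # written as -(sum) <= -1 to fit linprog's A_ub @ x <= b_ub form.
    A = np.array([[-1.0 if e in s else 0.0 for s in subsets] for e in universe])
    b = -np.ones(len(universe))
    result = linprog(c=np.ones(m), A_ub=A, b_ub=b, bounds=[(0, 1)] * m)
    # f = the largest number of sets any single element appears in.
    f = max(sum(e in s for s in subsets) for e in universe)
    return [i for i in range(m) if result.x[i] >= 1.0 / f]

subsets = [{"milk", "eggs"}, {"eggs", "soap", "rice"}, {"milk", "rice"}]
print(rounded_set_cover(["milk", "eggs", "soap", "rice"], subsets))
# e.g. [0, 1]: the set with soap is forced in, plus one set covering milk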

None of these three approaches is guaranteed to give an optimal solution to an instance of this problem. I will not go into it in this post, but these can all be shown to be within some guaranteed range of the optimal solution, thus making them approximation algorithms.

You can see how the three algorithms compare on random problem instances here.

Hope you enjoy.

The PageRank Algorithm

I think one of the best recent examples of the importance of mathematics is the rise of the search engine Google. I remember the world of search engines before Google, and it was dominated by names like AltaVista, Yahoo, WebCrawler, Excite, and the like. The standard way these search engines ranked the order in which pages would be listed for a search query was basically to count the number of times that query appeared on the pages in their database. The pages with the most occurrences were considered the most important, the pages with the second most were second most important, and so on and so forth.

This sounds like a feasible way of doing things but let me show you an example of how this can be tricked. Suppose I wrote my first web page and it looked like the following:
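A minimal page of this sort, reconstructed for illustration (note that each word appears only once):

<html>
  <head><title>My Math Page</title></head>
  <body>
    <p>Welcome! Here you will find interesting things about numbers, geometry and logic.</p>
  </body>
</html>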

That's a basic web page that may not garner much attention, and it wouldn't rank highly in most search engines, as no word appears more than once. Suppose that, this being a math web page, I wanted it to rank higher on the query "math". Then I could just edit the source code of the page to be as follows:
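Again a reconstruction for illustration, with the keyword quietly repeated:

<html>
  <head><title>My Math Page</title></head>
  <body>
    <p>Welcome! Here you will find interesting things about numbers, geometry and logic.</p>
    <p style="display:none">math math math math math math math math math</p>
  </body>
</html>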

This second page says not much more than the first, but the fact that the word math appears 9 additional times would increase the ranking of this page among math pages. This is a very simple example, but it shows how these search engine rankings did not have a useful metric for determining the important sites on the web.

Enter Google.

The way Google solved this problem of determining the importance of a web page is basically by counting the number of links into a web page – the theory being that the more important a web page is, the more people will be talking about it and thus linking to it. Also, the more important the people talking about (linking to) a web site, the more important that site is. This can be expressed mathematically by the following formula:
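PR(p) = (1 − d) / N + d · ( PR(q1)/L(q1) + PR(q2)/L(q2) + … + PR(qk)/L(qk) )

Here PR(p) is the rank of page p, N is the total number of pages, q1, …, qk are the pages that link to p, and L(q) is the number of outbound links on page q. (This is the standard form of the PageRank formula.)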

In the above formula, the variable d is called the damping factor, which helps to capture some of the random nature of the internet by saying that every site should have at least some minimal worth because of the idea that a random surfer could still get to these sites.
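A small power-iteration sketch of the idea in Python (illustrative, not the linked script; it assumes every page appears as a key and has at least one outgoing link):

def pagerank(links, d=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}            # start with equal worth
    for _ in range(iterations):
        # every page keeps the minimal worth (1 - d) / n ...
        new_rank = {p: (1.0 - d) / n for p in pages}
        # ... plus a share of the rank of every page linking to it
        for p, outgoing in links.items():
            for q in outgoing:
                new_rank[q] += d * rank[p] / len(outgoing)
        rank = new_rank
    return rank

links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
print(pagerank(links))   # C collects links from both A and B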

I have written a script to implement the algorithm here.

Other Blogs that have covered this topic
Blue Onion

How Could You Possibly Love/Hate Math?

Growing up, I never really liked math. I saw it as one of those necessary evils of school. People always told me that if I wanted to do well and get into college, I needed to do well in math. So I took the courses required of a high school student, but I remember feeling utter confusion from being in those classes. My key problem was my inquisitive nature. I really didn't like being "told" that certain things were true in math (I felt this way in most classes). I hated just memorizing stuff, or memorizing it incorrectly, and getting poor grades because I couldn't regurgitate the information precisely enough. If this stuff was in fact "true", I wanted to understand why. It seemed like so much was told to us without any explanation that it's hard to expect anybody to just buy into it. But that's what teachers expected. And I was sent to the principal's office a number of times for what they called "disturbing class", but I'd just call it asking questions.

At the same time, I was taking a debate class. This class was quite the opposite of my math classes, or really any other class I’d ever had. We were introduced to philosophers like Immanuel Kant, John Stuart Mill, Thomas Hobbes, John Rawls, etc. The list goes on and on. We discussed theories, and spoke of how these concepts could be used to support or reject various propositions. Although these philosophies were quite complex, what I loved was the inquiries we were allowed to make into understanding the various positions. Several classmates and I would sit and point out apparent paradoxes in the theories. We’d ask about them and sometimes find that others (more famous than us) had pointed out the same paradoxes and other things that seemed like paradoxes could be resolved with a deeper understanding of the philosophy.

Hate is a strong word, but I remember feeling that mathematicians were inferior to computer programmers because “all math could be programmed”. This was based on the number of formulas I had learned through high school and I remember having a similar feeling through my early years of college. But things changed when I took a course called Set Theory. Last year, I wrote a piece that somewhat describes this change:

They Do Exist!

Let me tell you a story about when I was a kid
See, I was confused and here's what I did.
I said "irrational number, what’s that supposed to mean?
Infinite decimal, no pattern? Nah, can't be what it seems."
So I dismissed them and called the teacher wrong.
Said they can't exist, so let’s move along.
The sad thing is that nobody seemed to mind.
Or maybe they thought showing me was a waste of time.

Then one teacher said "I can prove they exist to you.
Let me tell you about my friend, the square root of two."
I figured it'd be the same ol' same ol', so I said,
"Trying to show me infinity is like making gold from lead"
So he replies, "Suppose you're right, what would that imply?"
And immediately I thought of calling all my teachers lies.
"What if it can be written in lowest terms, say p over q.
Then if we square both sides we get a fraction for two."

He did a little math and showed that p must be even.
Then he asked, "if q is even, will you start believing?"
I stood, amazed by what he was about to do.
But I responded, "but we don't know anything about q"
He says, "but we do know that p squared is a multiple of 4.
And that is equal to 2 q squared, like we said before."
Then he divided by two and suddenly we knew something about q.
He had just shown that q must be even too.

Knowing now that the fraction couldn't be in lowest terms
a rational expression for this number cannot be confirmed.
So I shook his hand and called him a good man.
Because for once I could finally understand
a concept that I had denied all my life,
a concept that had caused me such strife.
And as I walked away from the teacher's midst,
Excited, I called him an alchemist and exhaled "THEY DO EXIST!"

Aside from its lack of poetic content, I think that many mathematicians can relate to this poem, particularly the ones who go into the field for its theoretic principles. For many of us, Set Theory is somewhat of a "back to the basics" course where we learn what math is really about. The focus is no longer on how well you can memorize a formula. Instead, it's more of a philosophy course on mathematics – like an introduction to the theory of mathematics, hence the name Set Theory.

The poem above focuses on a particular frustration of mine, irrational numbers. Early on, we're asked to believe that these numbers exist, but we're not given any answers as to why they should exist. The same could be said for a number of similar concepts – basically, whenever a new concept is introduced, there is a reasonable question of how we know it is true. This is not just a matter of practicality, but a necessity of mathematics. I mean, I could say "let's now consider the set of all numbers for which X + 1 = X + 2", but if that equation were true for some X, it would mean that 1 equals 2, which we know is false. So the set I'd be referring to is the empty set. We can still talk about it, but the empty set is what we'd be talking about.

So why is this concept of answering the why’s of mathematics ignored, sometimes until a student’s college years? This gives students a false impression of what math really is, which leads to people making statements like “I hate math”, not really knowing what math is about.

Learn About Binary Search Trees

Binary Search Tree Script
An Image of My Binary Search Tree Script

I just finished a script that shows the properties of the binary search tree data structure.

These data structures are organized such that the data lies in "nodes" and each node connects directly to up to two new nodes. These new nodes are called the children of the node, and the original node is called the parent. Because there are up to two children, we designate one child as the "left" child and the other as the "right" child, with the properties that the value stored in the left child is less than the value in the parent, which in turn is less than the value of its right child. If a parent has fewer than two children, then one (or both) of its children is given the value null.

The insert and delete procedures need to make sure that they keep the elements of a binary search tree in sorted order.
To insert into a BST, we must first find the correct location where the new element will be placed. This means comparing the value of the new element to the current head of the tree, resulting in three possible outcomes (a code sketch follows this list).
if the head is null, then insert the new node at the current position because there is no subtree to compare it to.
if the value of the new element is less than the value at the head node, run the insert procedure on the left child of head.
if the value of the new element is greater than the value at the head node, run the insert procedure on the right child of head.
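A minimal recursive sketch of this insert procedure in Python (illustrative, not the script's own code):

class Node:
    def __init__(self, value):
        self.value = value
        self.left = None    # child holding a smaller value
        self.right = None   # child holding a larger value

def insert(head, value):
    """Insert value into the tree rooted at head; return the root."""
    if head is None:                # no subtree to compare against
        return Node(value)
    if value < head.value:
        head.left = insert(head.left, value)
    elif value > head.value:
        head.right = insert(head.right, value)
    return head                     # equal values are ignored

root = None
for v in [5, 2, 8, 1, 3]:
    root = insert(root, v)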

Similarly, the remove procedure for a binary search tree must first find the element to be removed. Once that element is found, there are three cases depending on the type of node we are dealing with.
if the node has no children, then simply remove the node from the tree.
if the node has only one child (either a left child or a right child), then have the parent of the node point to the child of the node (thus bypassing the node itself).
if the node has two children, then we have two options: either replace the node with the minimum value of the right subtree or with the maximum value of the left subtree. The nodes that hold these minimum and maximum values have at most one child; for instance, the node holding the minimum of the right subtree cannot have a left child, since a left child would hold an even smaller value, contradicting the fact that we chose the minimum. Because these nodes have at most one child, we can use the two procedures above to remove them from the tree.
Because a binary search tree is organized differently from a standard array, there are different methods for viewing its contents. Three common methods are preorder, inorder, and postorder traversal (sketched in code after this list).
Preorder traversal visits the nodes of a binary search tree in the order (node), (left child), (right child).
Inorder traversal visits the nodes of a binary search tree in the order (left child), (node), (right child).
Postorder traversal visits the nodes of a binary search tree in the order (left child), (right child), (node).
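Continuing the Node class and the root built in the insert sketch above, the three traversals can be written in one line each; note that inorder traversal returns a BST's values in sorted order:

def preorder(node):
    return [] if node is None else [node.value] + preorder(node.left) + preorder(node.right)

def inorder(node):
    return [] if node is None else inorder(node.left) + [node.value] + inorder(node.right)

def postorder(node):
    return [] if node is None else postorder(node.left) + postorder(node.right) + [node.value]

print(preorder(root))   # [5, 2, 1, 3, 8]
print(inorder(root))    # [1, 2, 3, 5, 8]
print(postorder(root))  # [1, 3, 2, 8, 5]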

We are also interested in the depth of a tree, which is the number of layers or levels in the tree. This can be computed by finding the longest path from the root of the tree to a leaf node (a node with no children).

Other Blogs that have covered this topic:
Stoimen’s Web Log

Linear Search Algorithm

I have published code that shows examples of the Linear Search Algorithm.

The linear search algorithm iterates through each item in our data structure in search of a specific value. If the current item matches, we can return; otherwise we must continue to the next item.

In the worst case, this requires that we search through all items, because in an unsorted structure we cannot say whether an untested value is the value we are searching for.
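The whole algorithm fits in a few lines of Python (a generic sketch):

def linear_search(items, target):
    """Check each item in turn; return its index, or -1 if absent."""
    for i, item in enumerate(items):
        if item == target:
            return i       # found a match, stop immediately
    return -1              # every item was tested without a match

print(linear_search([7, 3, 9, 4], 9))  # 2
print(linear_search([7, 3, 9, 4], 5))  # -1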

Other Blogs that have covered this topic:
Dream.In.Code
Coding Bot

Examples of the Binary Search Algorithm

I have published code that shows examples of the Binary Search Algorithm.

In order for this algorithm to be applicable, we need to assume that we're dealing with a sorted list to start. As a result, instead of proceeding iteratively through each item in the list, the binary search algorithm repeatedly divides the remaining range in half, keeping only the half that could still contain the element.

It can be shown that the maximum number of iterations this algorithm requires is the number of times that the list can be divided into halves, which is on the order of log2(n), where n is the number of items in the list.
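A sketch of the iterative version in Python:

def binary_search(sorted_items, target):
    """Return the index of target in a sorted list, or -1 if absent."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2            # look at the middle of the range
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1                # target can only be in the right half
        else:
            hi = mid - 1                # target can only be in the left half
    return -1

print(binary_search([1, 3, 4, 7, 9, 11], 7))  # 3
print(binary_search([1, 3, 4, 7, 9, 11], 8))  # -1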