Learning Math through Set Theory

In grade school, we’re taught that math is about numbers. When we get to college (those of us who are still interested in math), we’re taught that mathematics is about sets, operations on sets, and properties of those sets.

Understanding Set Theory is fundamental to understanding advanced mathematics. I wrote these scripts so that users could begin to play with the different set operations that are taught in a basic set theory course. Here, the sets are limited to positive integers and we’re only looking at a few operations, in particular the union, intersection, difference, symmetric difference, and Cartesian product of two sets. I will explain what each of these is below.

The union of the sets S1 and S2 is the set S1 ∪ S2, which contains the elements that are in S1 or in S2 (or in both).
Note: S1 ∪ S2 is the same as S2 ∪ S1.

The intersection of the sets S1 and S2 is the set S1 ∩ S2, which contains the elements that are in BOTH S1 and S2.
Note: S1 ∩ S2 is the same as S2 ∩ S1.

The difference of the sets S1 and S2 is the set S1 \ S2, which contains the elements that are in S1 and not in S2.
Note: S1 \ S2 IS NOT the same as S2 \ S1.
Note: S1 \ S2 is the same as S1 ∩ S2ᶜ, where S2ᶜ denotes the complement of S2.

The symmetric difference of the sets S1 and S2 is the set S1 Δ S2, which contains the elements that are in S1 and not in S2, or the elements that are in S2 and not in S1.
Note: S1 Δ S2 is the same as S2 Δ S1.
Note: S1 Δ S2 is the same as (S1 ∩ S2ᶜ) ∪ (S2 ∩ S1ᶜ), or equivalently (S1 \ S2) ∪ (S2 \ S1).

The Cartesian product of the two sets S1 and S2 is the set S1 × S2 of all ordered pairs (a, b), where a ∈ S1 and b ∈ S2.
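
To see these operations in action, here is a minimal Python sketch (an illustration only, not the original scripts) that computes each operation on two small sets of positive integers:

# A minimal sketch of the set operations above, using Python's built-in set type.
S1 = {1, 2, 3, 4}
S2 = {3, 4, 5, 6}

print("Union:               ", S1 | S2)   # {1, 2, 3, 4, 5, 6}
print("Intersection:        ", S1 & S2)   # {3, 4}
print("Difference:          ", S1 - S2)   # {1, 2}
print("Symmetric difference:", S1 ^ S2)   # {1, 2, 5, 6}

# Cartesian product: all ordered pairs (a, b) with a in S1 and b in S2.
print("Cartesian product:   ", sorted((a, b) for a in S1 for b in S2))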

Linear Search Algorithm

I have published code that shows examples of the Linear Search Algorithm.

The linear search algorithm iterates through each item in our data structure in search of a specific value. If the current item matches, we can stop and return it (or its position); otherwise we continue to the next item.

In the worst case, this requires that we search through all of the items, because in an unsorted structure we cannot say whether an untested value is the value we are searching for.
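
As a rough illustration (a minimal Python sketch, not the published code), a linear search over a list might look like this:

def linear_search(items, target):
    # Check each item in turn; return its index as soon as it matches the target.
    for index, item in enumerate(items):
        if item == target:
            return index
    return -1  # the target is not in the list

print(linear_search([7, 3, 9, 4], 9))   # 2
print(linear_search([7, 3, 9, 4], 5))   # -1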

Other Blogs that have covered this topic:
Dream.In.Code
Coding Bot

Examples of the Binary Search Algorithm

I have published code that shows examples of the Binary Search Algorithm.

In order for this algorithm to be applicable, we need to assume that we’re dealing with a sorted list to start. As a result, instead of proceeding iteratively through each item in the list, the binary search algorithm repeatedly divides the list in half and continues searching only in the half that could contain the element.

It can be shown that the maximum number of iterations this algorithm requires is the number of times we can divide the list in half, which is on the order of log2(n), where n is the number of items in the list.
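
Here is a minimal Python sketch of the idea (an illustration only, not the published code):

def binary_search(sorted_items, target):
    # Repeatedly halve the search range, keeping only the half
    # that could still contain the target.
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1  # the target is not in the list

print(binary_search([1, 3, 4, 7, 9, 12], 7))   # 3
print(binary_search([1, 3, 4, 7, 9, 12], 8))   # -1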

Queue Data Structure

I have just published a program that shows examples of a queue data structure.

Queues operate under the First In, First Out (FIFO) principle, which is similar to waiting in a line.

The two main operations on a queue are Enqueue (insert an element) and Dequeue (remove an element). Just like waiting in line, when an element is enqueued it is inserted at the back of the queue. And also like waiting in line, when an element is dequeued it is removed from the front of the queue.
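
Here is a minimal Python sketch of these two operations (an illustration only, not the published program), using collections.deque as the underlying structure:

from collections import deque

queue = deque()

# Enqueue: insert elements at the back of the queue.
queue.append("first")
queue.append("second")
queue.append("third")

# Dequeue: remove elements from the front of the queue.
print(queue.popleft())   # "first"
print(queue.popleft())   # "second"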

Stack Data Structure

I have just published a program that shows examples of a stack data structure.

Stacks are an elementary Data Structure where all interaction with the data is done by looking at the top element.

There are two main operations on Stacks, Push and Pop.

The push operation is used to insert an item into the stack. The name push comes from the fact that when an item is inserted it is placed above the existing items, so we think of it as lying on top of the stack.

Likewise, the pop operation is used to remove an item from the stack. Items are removed from the top of the stack first.
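
Here is a minimal Python sketch of push and pop (an illustration only, not the published program), using a plain list as the underlying structure:

stack = []

# Push: place an item on top of the stack.
stack.append("bottom")
stack.append("middle")
stack.append("top")

# Pop: remove the item currently on top of the stack.
print(stack.pop())   # "top"
print(stack.pop())   # "middle"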

Sorting Algorithms (Take Two)

Sorting is an essential part of our everyday lives. Because of this importance, many people have devoted much time towards improving the performance of different sorting procedures. Previously, I wrote a script comparing many of these algorithms. I decided to break that single script up into several scripts (one for each algorithm) so that the individual algorithms are easier to access and learn more about.

BubbleSort
When I first learned about BubbleSort, I thought of being in elementary school and how my teacher would arrange students in line by increasing height. The way she did this was to have us begin by standing in line (in arbitrary order). Then she would start at the beginning of the line and compare the height of each student to that of the student behind him. If the student in front was taller, she’d have the two students change places. She continued doing this until she reached the back of the line, taking note of whether any pair of students had changed places. If a swap had been made, she would go back to the beginning of the line and start again.

What my teacher was doing (knowingly or unknowingly) was executing the BubbleSort algorithm. A more formal pseudocode version of this algorithm is listed below:

BubbleSort(A, start, end):
    swapped <- true
    while (swapped is true)
        swapped <- false
        for (i = start to end - 1)
            if (A[i] > A[i+1]) then
                temp <- A[i]
                A[i] <- A[i+1]
                A[i+1] <- temp
                swapped <- true
            end if
        end for
    end while
end BubbleSort

Here is an execution of the BubbleSort algorithm.
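
For readers who prefer runnable code, here is a short Python version of the same idea (my own sketch, mirroring the pseudocode above):

def bubble_sort(a):
    # Repeatedly sweep through the list, swapping adjacent items that are
    # out of order, until a full pass makes no swaps.
    swapped = True
    while swapped:
        swapped = False
        for i in range(len(a) - 1):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
    return a

print(bubble_sort([5, 1, 4, 2, 8]))   # [1, 2, 4, 5, 8]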

SelectionSort
One of the more common ways that we do sorting in real life is through something similar to selection sort. Suppose we have a list of graded exams and want to have them in descending order before we pass them back to the students. The idea behind selection sort is to search for the largest element in the list, or in this case the highest test score, move that element to the beginning of the list, and then repeat the same procedure on the remaining portion of the list.

Here is a more formal pseudocode for the SelectionSort algorithm:

SelectionSort(A, start, end):
    for (i = start to end)
        min <- i
        for (j = i+1 to end)
            if (A[j] < A[min])
                min <- j
            end if
        end for

        temp <- A[i]
        A[i] <- A[min]
        A[min] <- temp
    end for
end SelectionSort

Here is an execution of the SelectionSort algorithm.
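
Here is a short Python version of the same idea (my own sketch, mirroring the pseudocode above):

def selection_sort(a):
    # For each position, find the smallest remaining item and swap it into place.
    for i in range(len(a)):
        min_index = i
        for j in range(i + 1, len(a)):
            if a[j] < a[min_index]:
                min_index = j
        a[i], a[min_index] = a[min_index], a[i]
    return a

print(selection_sort([64, 25, 12, 22, 11]))   # [11, 12, 22, 25, 64]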

InsertionSort
If we think of the same set of (unsorted) graded exams as in the SelectionSort description, which we wish to have in descending order before we pass them back to the students, an alternative way to sort the exams is a more “sort-as-you-go” type of procedure where each test is only compared to an already sorted subset of the exams. The way this is carried out is to divide the exams into two piles. Originally, one pile (call it Pile 1) is empty and the other pile (Pile 2) contains the entire set of exams. We take the exam from the top of Pile 2, compare it to the exams in Pile 1, and place it in the proper location in that pile (removing it from Pile 2). We continue this until Pile 2 is empty, at which point Pile 1 will contain all the exams and the set of exams will be sorted.

Here is the pseudocode for the InsertionSort algorithm:
InsertionSort(A, start, end):
    for (i = start to end)
        val <- A[i]
        j <- i

        while (j > start and A[j - 1] > val)
            A[j] <- A[j-1]
            j <- j - 1
        end while

        A[j] <- val
    end for
end InsertionSort

Here is an execution of the InsertionSort algorithm.
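
Here is a short Python version of the same idea (my own sketch, mirroring the pseudocode above):

def insertion_sort(a):
    # Grow a sorted prefix one item at a time, shifting larger items to the
    # right until the new item is in its proper place.
    for i in range(1, len(a)):
        val = a[i]
        j = i
        while j > 0 and a[j - 1] > val:
            a[j] = a[j - 1]
            j -= 1
        a[j] = val
    return a

print(insertion_sort([12, 11, 13, 5, 6]))   # [5, 6, 11, 12, 13]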

MergeSort
MergeSort is based on a simple “Divide and Conquer” approach. The principle behind it is this: if an entire list is sorted, then each half of that list must also be sorted, and sorting a smaller list is an easier problem than sorting a larger one. So we split the list in half and sort each half. Once the two halves are sorted, we run a merge algorithm that combines them into one larger sorted list. The halves are themselves sorted using the same principle, splitting again and again until the sublists have size 1 or 2, in which case we can simply swap the two items or return the single item.

The algorithm itself is similar to a hierarchy at work.
The boss is given a job: “sort these 16 documents in decreasing order”.
The boss divides the list into two sets of size 8, gets two subordinates, and says “hey you two, each of you take one of these lists, sort it, and get it back to me”.
Each of the subordinates then divides their list into two distinct lists of size 4 and gets two sub-subordinates to sort these smaller lists.
The sub-subordinates then divide their lists into two distinct lists of size 2 and get two sub-sub-subordinates to sort these smaller lists.
The sub-sub-subordinates now have lists of size 2, which are easy to sort: if a list is already ordered, they say “it’s already sorted”; if it is not, they swap the two elements and say “here you go”.
Now that the lists of size 2 have been sorted, the sub-subordinates need to combine them into lists of size 4. This can be done by repeatedly comparing the front elements of the two lists and taking the larger of the two.
Similarly, the subordinates can combine the lists of size 4 into lists of size 8 using the same principle.
Finally, the boss can take the two lists of size 8 and combine them into a sorted list of size 16 using the same principle.

Here is the pseudocode for the MergeSort Algorithm:
MergeSort(A, start, end):
    if ((end - start + 1) <= 1)
        return A
    end if

    mid <- floor((start + end) / 2)

    left <- {A[start], A[start+1], ..., A[mid]}
    right <- {A[mid+1], A[mid+2], ..., A[end]}

    left <- MergeSort(left, 0, mid - start)
    right <- MergeSort(right, 0, end - mid - 1)

    return Merge(left, right)
end MergeSort

Here is an execution of the MergeSort algorithm.
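
Because the pseudocode above leaves the Merge step unspecified, here is a short Python version of the whole procedure (my own sketch, sorting in increasing order), including a merge function:

def merge(left, right):
    # Combine two sorted lists into one sorted list by repeatedly taking
    # the smaller of the two front elements.
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])
    result.extend(right[j:])
    return result

def merge_sort(a):
    # Lists of size 0 or 1 are already sorted.
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    return merge(merge_sort(a[:mid]), merge_sort(a[mid:]))

print(merge_sort([8, 3, 5, 1, 9, 2, 7, 4]))   # [1, 2, 3, 4, 5, 7, 8, 9]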

QuickSort
QuickSort is often considered the best-performing of the sorting algorithms listed here. Like MergeSort, it operates under a divide and conquer principle. Unlike MergeSort, however, the decisions of where to divide the list (the choice of the pivot element) and how to make the recursive calls are a bit more involved.

The algorithm works by choosing a pivot element in the list (by default we can let this be the middle element in the list, but there are more complex variations that help decide what this should be), and then reordering the list so that the elements to the left of the pivot (call this sub-list “small”) are all less than the pivot element, and the elements to the right of the pivot (call this sub-list “big”) are all greater than the pivot element. So after the partitioning step of QuickSort, the largest element of “small” is no greater than the smallest element of “big”. We then check whether the sub-lists “small” and “big” still need to be sorted, which is the case whenever they contain more than one element, and if so we call QuickSort on each of the sub-lists.

Here is the pseudocode for the QuickSort Algorithm:
QuickSort(A, left, right):
    i <- left
    j <- right
    pivot <- A[floor((left + right) / 2)]

    while (i <= j)
        while (A[i] < pivot)
            i <- i+1
        end while
        while (A[j] > pivot)
            j <- j-1
        end while
        if (i <= j)
            tmp <- A[i]
            A[i] <- A[j]
            A[j] <- tmp
            i <- i+1
            j <- j-1
        end if
    end while

    if (left < j)
        A <- QuickSort(A, left, j)
    end if
    if (i < right)
        A <- QuickSort(A, i, right)
    end if

    return A
end QuickSort

Here is an execution of the QuickSort algorithm.
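
Here is a short Python version of the same idea (my own sketch, mirroring the pseudocode above with the middle element as the pivot):

def quick_sort(a, left=0, right=None):
    # Sort a[left..right] in place by partitioning around a pivot and
    # recursing on the two sub-lists.
    if right is None:
        right = len(a) - 1
    if left >= right:
        return a
    i, j = left, right
    pivot = a[(left + right) // 2]
    while i <= j:
        while a[i] < pivot:
            i += 1
        while a[j] > pivot:
            j -= 1
        if i <= j:
            a[i], a[j] = a[j], a[i]
            i += 1
            j -= 1
    if left < j:
        quick_sort(a, left, j)
    if i < right:
        quick_sort(a, i, right)
    return a

print(quick_sort([9, 2, 7, 1, 8, 3]))   # [1, 2, 3, 7, 8, 9]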