Category Archives: Examples

These are updates to my examples database.

Set Partition Problems

This is my first post of 2019 and my first post in a while. There was one posted a few months ago, but it was not really geared towards the algorithms and learning focus of the site. I have been doing a lot of coding in my spare time, but honestly life has just gotten in the way. It’s not a bad thing, but life is life and sometimes I have to prioritize things. In particular, I have been having ideas and actually coding things up, but the time it takes to clean up code, write a blog entry, and find a nice way to visualize these things is something I haven’t been able to focus on as much as I’ve wanted to.

That said, I want to talk to you about the Set Partition Problem today. You can go to Wikipedia to get more information about this problem, but I will give you a brief introduction to it and then talk about two different approaches to it. The problem assumes that we are given as input a multi-set S. The reason we say it is a multi-set and not a simple set is that the same element can appear multiple times. So if S1 = {2} and S2 = {2, 2}, then as sets they are both equal to {2}, but as multi-sets they are different because a multi-set keeps track of how many times each element occurs. The elements of S are assumed to be positive integers.

So given this multi-set S, we ask whether the elements of S can be divided into two smaller multi-sets, C1 and C2, where

  • C1 ∪ C2 = S
  • C1 ∩ C2 = ∅
  • Σ_{x ∈ C1} x = Σ_{x ∈ C2} x

The first two bullets above say that the multi-sets C1 and C2 form a partition of S. The third bullet says that the sums of the elements in the two child multi-sets are equal.

This problem is known to be NP-complete, which means that it is one of the more difficult decision problems. Because of this, an algorithm that solves the problem exactly will generally take a long time to run, and an algorithm that runs quickly will more than likely be incorrect on some instances.

I will show you two approaches to this problem. One is based on Dynamic Programming (DP) and one is based on Greedy Algorithms. The DP version solves the problem exactly but has a slow running time. The greedy algorithm is fast but is not guaranteed to always give the correct answer.

Dynamic Programming is based on the principle of optimality, which says that an optimal solution to the overall problem must contain optimal solutions to its subproblems. The approach keeps a table in which these subproblems and their optimal solutions can be looked up instead of re-solved.

For this problem, we will build a table where the rows represent the target sums and the columns represent prefixes of the given multi-set (containing the first 0, …, n elements). The question we are repeatedly asking is “can we find a subset of the elements in this column whose sum is exactly the given row sum?” Here is an example of the DP algorithm on the multi-set {5, 6, 5, 6, 7}.
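
To make the table idea concrete, here is a minimal sketch of the DP approach in Python (an illustration of the technique, not the code behind my script). Rather than storing the whole row-by-column table, it keeps one boolean array where table[j] records whether some subset of the elements seen so far sums to exactly j; the multi-set can be split evenly exactly when table[total / 2] ends up true.

```python
def can_partition(s):
    """Decide whether the multi-set s can be split into two halves of equal sum."""
    total = sum(s)
    if total % 2 != 0:            # an odd total can never be split evenly
        return False
    target = total // 2
    # table[j] is True when some subset of the elements seen so far sums to j
    table = [False] * (target + 1)
    table[0] = True               # the empty subset sums to 0
    for value in s:
        # iterate downward so each element is used at most once
        for j in range(target, value - 1, -1):
            if table[j - value]:
                table[j] = True
    return table[target]

print(can_partition([3, 1, 1, 2, 2, 1]))   # True: {3, 2} and {1, 1, 2, 1}
print(can_partition([5, 6, 5, 6, 7]))      # False: the total, 29, is odd
```

The running time is proportional to the number of elements times the target sum, which is why this exact method slows down as the numbers in S grow.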

Greedy Algorithms are generally based on sorting the elements according to some principle and using that ordering to try to answer the underlying question. The main problem with this approach is that it is short-sighted: it does not look at the overall picture. Such algorithms are known for finding local optima that are not always globally optimal. The benefit, though, is that they are generally easier to code and easier to understand.

For the Set Partition Problem, the Greedy approach is to sort the elements in descending order. Once this is done, we keep two subsets while iterating through the sorted array, always adding the next element to the subset with the smaller sum.
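
Here is a sketch of that greedy heuristic (again an illustration, with a function name of my own choosing), along with an instance where it fails:

```python
def greedy_partition(s):
    """Heuristic: place each element (largest first) into the lighter set."""
    set1, set2, sum1, sum2 = [], [], 0, 0
    for value in sorted(s, reverse=True):
        if sum1 <= sum2:
            set1.append(value)
            sum1 += value
        else:
            set2.append(value)
            sum2 += value
    return set1, set2

# Not always correct: for {5, 5, 4, 3, 3} this produces sets summing to
# 9 and 11, even though {5, 5} and {4, 3, 3} both sum to 10.
print(greedy_partition([5, 5, 4, 3, 3]))   # ([5, 4], [5, 3, 3])
```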

For more examples, check out Set Partition Problems.

Binary Puzzles

As you can probably tell, I’m a big fan of puzzles. On one hand you can say that a good puzzle is nothing but a particular instance of a complex problem that we’re being asked to solve. What exactly makes a problem complex though?

To a large extent that depends on the person playing the puzzles. Different puzzles are based on, and meant to highlight, different concepts. Some puzzles really focus on dynamic programming, like the Triangle Sum Puzzles or the Unidirectional TSP Puzzles.

Other puzzles are based on more complicated problems, in many cases instances of NP-complete problems. Unlike the puzzles mentioned above, there is generally no known optimal strategy for solving these puzzles quickly. Some basic examples are ones like the Independent Set Puzzles, which just give a random (small) instance of the problem and ask users to solve it. Most approaches involve using logical deduction to reduce the number of possible choices until a “guess” must be made, and then implementing some form of backtracking (which is not really guessing, since you form the logical conclusion that if the guess were true, you would reach either (a) a violation of the rules or (b) a completed puzzle).

One day a few months back I was driving home from work and traffic was so bad that I decided to stop at the store. While browsing the books, I noticed a puzzle collection. Among the puzzles I found in that book were the Range Puzzles I posted about earlier. However, I also found binary puzzles.

Filled binary puzzles are based on three simple rules:
1. No three adjacent cells in any row or column can contain the same value (so no 000 or 111 in any row or column).
2. Every row and column must have an equal number of zeros and ones.
3. Each row and column must be unique.

There is a paper from 2013 stating that Binary Puzzles are NP-complete. There is another paper that discusses strategies involved in Solving a Binary Puzzle.

Once I finished the puzzles in that book, the question quickly became (as it always does) “where can I get more?” I began writing a generator for these puzzles and finished it earlier this year. Now I want to share it with you. You can visit the examples section to play those games at Binary Puzzles.

Below I will go over a sample puzzle and how I go about solving it. First let’s look at a 6 by 6 puzzle with some hints given (empty cells are shown as _):

_ 0 1 _ _ _
0 _ 1 0 _ _
1 1 _ 0 _ _
_ 1 _ _ 0 _
_ _ _ 0 _ _
1 _ _ 1 _ 0

We look at this table and can first look for locations where we have a “forced move”. An obvious example of such a move is three adjacent cells in the same row or column where two have the same value; the third must then take the opposite value. A second type of forced move arises when a row or column already has its full number of zeros or ones; the remaining cells in that row or column must take the opposite value.

So in the above puzzle (reading cell coordinates as (column, row)), the values in cells (2, 2) and (2, 5) must be 0 because cells (2, 3) and (2, 4) are both 1. Now we see that column 2 has 5 of its 6 necessary values, three of which are 0s. So the last value in this column, cell (2, 6), must be a 1 in order for there to be an equal number of 0s and 1s.
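
These two move types are mechanical enough to code up directly. Below is a small sketch of how they might look for a single row (columns work the same way); the naming is mine, and None marks an empty cell.

```python
def fill_row(row):
    """Apply the two forced moves to one row until nothing more changes."""
    n = len(row)
    changed = True
    while changed:
        changed = False
        # Move 1: in any window of three cells, two equal known values
        # force the remaining blank cell to take the opposite value.
        for i in range(n - 2):
            trio = row[i:i + 3]
            for a, b, c in ((0, 1, 2), (0, 2, 1), (1, 2, 0)):
                if trio[a] is not None and trio[a] == trio[b] and trio[c] is None:
                    row[i + c] = 1 - trio[a]
                    changed = True
        # Move 2: once a value reaches its quota of n / 2, every remaining
        # blank in the row must take the other value.
        for v in (0, 1):
            if row.count(v) == n // 2 and None in row:
                row = [1 - v if cell is None else cell for cell in row]
                changed = True
    return row

print(fill_row([None, 1, 1, None, 0, None]))   # [0, 1, 1, 0, 0, 1]
```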

For some easier puzzles, these first two move types will get you far enough to completely fill in all the cells. More advanced puzzles, though, may require a more thorough analysis.

As always, check it out and let me know what you think. 

Range Puzzles


I have always liked puzzles. I really enjoy discovering a new puzzle. Sometimes when I have discovered a new puzzle, I enjoy it so much that I can’t seem to do enough of them. In such cases, I quickly run out of these puzzles to do and find myself looking for either a new puzzle or a way to generate them on my own.

Such was the case when I discovered the “Range Puzzles”. The rules are simple:

Every cell is marked either blue or gray
No two gray cells can be next to one another
The grid must be connected (i.e. there is always a path from every cell to every other cell using horizontal and vertical connections).
Some cells have a number inside. This indicates the number of cells that can be viewed (horizontally and vertically in both directions) by this cell, including the cell itself; a sketch of this counting rule follows below.
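
Assuming the standard convention that gray cells block the view (my reading of the rules above), the counting rule can be sketched like this, with True for blue cells and False for gray ones:

```python
def visible(grid, r, c):
    """Count the cells seen from (r, c): the cell itself plus the runs of
    blue cells in all four directions, stopped by gray cells or the border."""
    count = 1
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        rr, cc = r + dr, c + dc
        while 0 <= rr < len(grid) and 0 <= cc < len(grid[0]) and grid[rr][cc]:
            count += 1
            rr, cc = rr + dr, cc + dc
    return count
```

A numbered cell is satisfied exactly when visible returns its number.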

To try out these puzzles, visit Range Puzzles at LEARNINGlover.com

Floyd-Warshall Shortest Paths

The Floyd-Warshall algorithm is an all-pairs shortest paths algorithm. This can be contrasted with algorithms like Dijkstra’s, which gives the shortest paths from a single node to all other nodes in the graph.

Floyd-Warshall’s algorithm works by considering first the edge set of the graph; these are all the paths of the graph that use exactly one edge. Node pairs that are connected by an edge will have their shortest path set to the length of that edge, while all other node pairs will have their shortest path set to infinity. The program then runs through every triplet of nodes (i, j, k) and checks whether the path from i to k followed by the path from k to j is shorter than the current path from i to j. If so, the distance and the path are updated.

So let’s consider an example on the graph in the image above. The edge set of this graph is E = {(0, 1), (0, 2), (0, 3), (1, 3), (3, 4)}. So our initial table is:

  0 1 2 3 4
0 inf (0, 1) (0, 2) (0, 3) inf
1 (0, 1) inf inf (1, 3) inf
2 (0, 2) inf inf inf inf
3 (0, 3) (1, 3) inf inf (3, 4)
4 inf inf inf (3, 4) inf

As we look to update the paths, we first look for routes that pass through node 0, then through node 1, and so on:

Because node 0 connects to both node 1 and node 2, but node 1 does not connect to node 2 directly, the matrix above tells us that
cost(1, 0) + cost(0, 2) < cost(1, 2), so we can update the shortest path from node 1 to node 2 to be (1, 0, 2).

Because node 0 connects to both node 2 and node 3, but node 2 does not connect to node 3 directly, the matrix above tells us that
cost(2, 0) + cost(0, 3) < cost(2, 3), so we can update the shortest path from node 2 to node 3 to be (2, 0, 3).

Because node 3 connects to both node 0 and node 4, but node 0 does not connect to node 4 directly, the matrix above tells us that
cost(0, 3) + cost(3, 4) < cost(0, 4), so we can update the shortest path from node 0 to node 4 to be (0, 3, 4).

Because node 3 connects to both node 1 and node 4, but node 1 does not connect to node 4 directly, the matrix above tells us that
cost(1, 3) + cost(3, 4) < cost(1, 4), so we can update the shortest path from node 1 to node 4 to be (1, 3, 4).

Because node 3 connects to both node 2 and node 4, but node 2 does not connect to node 4 directly, we now have
cost(2, 3) + cost(3, 4) < cost(2, 4), so we can update the shortest path from node 2 to node 4 to be (2, 0, 3, 4).

The final table giving the list of shortest paths from every node to every other node is given below.

  0 1 2 3 4
0 inf (0, 1) (0, 2) (0, 3) (0, 3, 4)
1 (0, 1) inf (1, 0, 2) (1, 3) (1, 3, 4)
2 (0, 2) (1, 0, 2) inf (2, 0, 3) (2, 0, 3, 4)
3 (0, 3) (1, 3) (2, 0, 3) inf (3, 4)
4 (0, 3, 4) (1, 3, 4) (2, 0, 3, 4) (3, 4) inf
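
Those tables fall out of a short implementation. Here is a sketch in Python (my own rendition, not the site’s script); since the edge lengths are shown only in the image, the example call assumes every edge has length 1, which reproduces the same routes:

```python
import math

def floyd_warshall(n, edges):
    """All-pairs shortest paths on an undirected graph.
    edges maps (u, v) pairs to edge lengths."""
    dist = [[math.inf] * n for _ in range(n)]
    nxt = [[None] * n for _ in range(n)]      # next hop on the best i-j path
    for i in range(n):
        dist[i][i] = 0
    for (u, v), w in edges.items():
        dist[u][v] = dist[v][u] = w
        nxt[u][v], nxt[v][u] = v, u
    for k in range(n):                         # let k serve as an intermediate node
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
                    nxt[i][j] = nxt[i][k]
    return dist, nxt

def path(nxt, i, j):
    """Rebuild the node sequence of the shortest path from i to j."""
    if nxt[i][j] is None:
        return None
    p = [i]
    while i != j:
        i = nxt[i][j]
        p.append(i)
    return p

dist, nxt = floyd_warshall(5, {(0, 1): 1, (0, 2): 1, (0, 3): 1, (1, 3): 1, (3, 4): 1})
print(path(nxt, 2, 4))   # [2, 0, 3, 4], matching the table above
```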

To see more examples and to help answer questions, check out the script in my examples section on the Floyd-Warshall algorithm.

Degree Centrality of a Graph


I wanted to spend some time on centrality measures of a graph. These are measurements of how important each node (or edge) is to the overall graph. But how do we define, or determine, importance? There is no unique way to answer this question, so there are varying metrics for measuring centrality. Which one you choose depends on several factors, including how much of the graph each metric takes into account and the running time of the metrics you’re considering.

I have just published a script focusing on the degree centrality metric. Degree centrality is called a “walk metric” because it determines how important a node is by how many other nodes can be reached by walks of up to a certain length. Let’s look at the definition of the degree of a node to see why it is called a walk metric.

In an undirected graph G = (V, E), the degree of a node u ∈ V is |{v | (u, v) ∈ E}|. This is the size of the set of nodes that are connected to node u via a single edge. Another way of describing a single edge is a walk of length one. So the degree metric measures the importance of a node by the number of unique walks of length one.

The normalized degree centrality of a node v in a graph G = (V, E) measures how many nodes are connected to v, compared to the maximum possible number of edges that could be connected to this node. Because we are dealing with simple undirected graphs (at most a single edge between any two distinct vertices), this maximum is always |V| − 1. So the normalized degree centrality is the degree of the node (the number of nodes it is connected to) divided by |V| − 1.

So for the example above, node 0 has degree 6 because it is connected to nodes 2, 5, 9, 10, 11, and 12. There are 15 total nodes in this graph, so the normalized degree centrality of node 0 is 6 / 14, which is approximately 0.428571.
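
The computation itself is a one-liner per node. A sketch, with the graph given as an adjacency structure of my own choosing:

```python
def degree_centrality(adj):
    """Normalized degree centrality of every node in a simple undirected
    graph given as {node: set of neighbors}."""
    n = len(adj)
    return {u: len(neighbors) / (n - 1) for u, neighbors in adj.items()}

# For a 15-node graph where node 0 neighbors {2, 5, 9, 10, 11, 12},
# degree_centrality(adj)[0] would be 6 / 14, approximately 0.428571.
```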

To see more examples and to help answer questions, check out the script in my examples section on degree centrality.

Tarjan’s Strongly Connected Components Algorithm

I just added a program that finds the strongly connected components of a graph using Tarjan’s Algorithm.

A strongly connected component of a graph G is a subgraph S of G such that for every pair of nodes u and v in S there is a path from u to v and a path from v to u.

To find these strongly connected components we implement Tarjan’s algorithm. The idea behind Tarjan’s algorithm is to run a depth first search from an arbitrary node in the graph, labeling nodes reachable from this start node in the order they are reached. The algorithm also keeps track of the “oldest” node that can be reached from a given node, indicated by the lowest label reachable from that node. We will call the first property label(v) and the second lowlink(v).

When a node is discovered, label(v) and lowlink(v) start out the same. As the algorithm executes, the DFS runs on each discovered node, which in turn updates lowlink(v) with any (older) nodes that can be reached. If, after the DFS call, a node cannot connect to any older node, i.e. if label(v) is the same as lowlink(v), then this node does not have a path to any node with a lower label, and it becomes the first node of a new strongly connected component.
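
Putting label(v) and lowlink(v) together, the whole algorithm fits in a page. Here is a recursive sketch (a plain rendition of the textbook algorithm, not my script’s exact code):

```python
def tarjan_scc(graph):
    """Strongly connected components of a directed graph given as
    {node: [successors]}; every node should appear as a key."""
    index, lowlink = {}, {}        # label(v) and lowlink(v) from the text
    stack, on_stack = [], set()
    counter = [0]
    components = []

    def dfs(v):
        index[v] = lowlink[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, []):
            if w not in index:
                dfs(w)
                lowlink[v] = min(lowlink[v], lowlink[w])
            elif w in on_stack:
                lowlink[v] = min(lowlink[v], index[w])
        if lowlink[v] == index[v]:  # v cannot reach an older node: new component
            component = []
            while True:
                w = stack.pop()
                on_stack.remove(w)
                component.append(w)
                if w == v:
                    break
            components.append(component)

    for v in graph:
        if v not in index:
            dfs(v)
    return components

print(tarjan_scc({0: [1], 1: [2], 2: [0, 3], 3: []}))   # [[3], [2, 1, 0]]
```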

Feel free to check it out and let me know what you think in the comments below.

Longest Common Subsequence

Suppose you and I each had an ordered list of items and we were interested in comparing how similar those lists are. One calculation we can perform on these two lists is the Longest Common Subsequence. A sequence X is an ordered list of elements <x1, …, xn>. A subsequence Z of X is another sequence where (1) each element of Z is also an element of X and (2) the elements of Z occur in the same order (in Z) as they do in X.

Note that we do not say that the elements of Z need to be a contiguous block of elements. If that were required we would be defining a substring. So as an example, suppose we have as an initial string,
X = C, O, M, P, U, T, E, R. 
Then the following are all subsequences: 
Z1 = C, M, U, T, R
Z2 = C, O, M, P
Z3 = U, T, E, R
Z4 = O, P, T, E

I will note that Z2 and Z3 are also substrings since they consist of contiguous blocks of characters.

The length of a sequence is simply the number of characters it contains. So X has length 8, Z1 has length 5, and Z2, Z3 and Z4 have length 4.

Suppose now that we had a second string, Y = P, R, O, G, R, A, M, M, E, R, and are interested in the longest common subsequence between the two. We can find it by observing that there is a bit of recursion going on with this question. What I mean is that asking “What is the longest common subsequence between X and Y?” is the same as asking “What is the longest common subsequence between X and Y once we have seen 8 characters of X and 10 characters of Y?”

There are three possible ways to answer this question. 

If x_8 equals y_10, then we ask the same question about X_7 and Y_9 and add 1 to the answer.
If x_8 is not equal to y_10, then the answer will be the maximum of the answers for the pair X_7, Y_10 and the pair X_8, Y_9.
If we reach the beginning of either string, we are forced to answer 0.

Then the function looks like this:

LCS(X_i, Y_j) =
  0, if i = 0 or j = 0
  1 + LCS(X_{i-1}, Y_{j-1}), if x_i equals y_j
  max(LCS(X_{i-1}, Y_j), LCS(X_i, Y_{j-1})), otherwise

Below is a table showing how we would solve the problem mentioned.

The strategy used to devise this solution is called dynamic programming. It is useful when we can solve larger problems by solving overlapping subproblems, as was the case here. In these situations we can generally store the data in a table and avoid re-solving subproblems on which many larger problems depend.
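
Written bottom-up, the table-filling version of that recurrence is short. A sketch:

```python
def lcs_length(x, y):
    """Length of the longest common subsequence of x and y, filled in
    bottom-up: table[i][j] answers the question after seeing i characters
    of x and j characters of y."""
    m, n = len(x), len(y)
    table = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                table[i][j] = 1 + table[i - 1][j - 1]
            else:
                table[i][j] = max(table[i - 1][j], table[i][j - 1])
    return table[m][n]

print(lcs_length("COMPUTER", "PROGRAMMER"))   # 4 (one such subsequence is 'OMER')
```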

You can see this algorithm in action at LEARNINGlover.com: Longest Common Subsequence. Let me know what you think.

Independent Set Puzzles

In this post, I want to return to the idea of NP-Complete problems. There is a more technical, more formal definition that I can refer you to, but I like to refer to the images from Garey and Johnson’s “Computers and Intractability: A Guide to the Theory of NP-Completeness”. They help convey the difficulty of NP-Complete problems with two images. The first shows a single individual telling someone that he has been unable to solve the problem. The second shows that same individual telling the same person that not only was he unable to solve the problem, but neither was a long line of other people. The theory of NP-Complete problems revolves around the fact that if an efficient algorithm exists for one NP-Complete problem, then an efficient algorithm exists for every problem in the class NP.

Today, I would like to present a puzzle I created to play with the Independent Set problem. This is the problem where we are given a graph G = (V, E) and are asked to find a maximum set of vertices S such that there is no edge in the graph G between any two vertices in S. The decision version of this problem (asking whether there is an independent set of size k) is NP-Complete, so the known algorithms for the problem either have a slow running time or do not solve it exactly.

This problem is closely related to another puzzle I posted last year, the Clique problem. In fact, Karp originally proved that Clique was NP-Complete by showing that if Clique could be solved efficiently, then Independent Set could be solved efficiently. He did this by constructing a second graph Ḡ, called the complement of G, containing the same vertices as G along with exactly the edges that are not present in G (any edge present in G will not be present in Ḡ). Then he showed that the nodes representing a maximum clique in G would represent a maximum independent set in Ḡ. He had already shown that Independent Set was NP-Complete, which meant that both Independent Set and Clique were among the most difficult problems within the class known as NP.
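
The construction itself is tiny; here is a sketch of building the complement’s edge set, under which any maximum-clique finder doubles as a maximum-independent-set finder:

```python
def complement_edges(n, edges):
    """Edge set of the complement graph: all node pairs not joined in G."""
    present = {frozenset(e) for e in edges}
    return [(u, v) for u in range(n) for v in range(u + 1, n)
            if frozenset((u, v)) not in present]

# A triangle plus an isolated node: {0, 1, 2} is a clique of G, and those
# same nodes form an independent set in the complement.
print(complement_edges(4, [(0, 1), (0, 2), (1, 2)]))   # [(0, 3), (1, 3), (2, 3)]
```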

The puzzle begins with an undirected graph and asks users to find a maximum independent set. Users should click on the numbers in the table below the graph to indicate the nodes they wish to select in their independent set (purple indicates that a node is selected, gray that it is not). Once a user has a potential solution, they can press the “Check” button to see if their solution is optimal. If a user is having trouble and simply wishes to see the maximum independent set, they can press the “Solve” button. And to generate a new problem, users can press the “New Problem” button.

As a result of this relationship between the Clique problem and the Independent Set problem, the Bron-Kerbosch algorithm that was used to find maximum cliques previously can also be used here.

Let’s Learn About XOR Encryption

One of the more common things about this generation is the constant desire to write up (type) their thoughts. So many of the conversations from my high school days were long-lasting but quickly forgotten. Today’s generation is much more likely to blog, tweet, write status updates or simply open up a notepad file and write up their thoughts after such a conversation.

When we feel that our thoughts are not ready for public eyes (maybe you want to run your idea by the Patent and Trademark Office before speaking about it) we may seek some form of security to ensure that they stay private. An old fashioned way of doing this was to write in a diary and enclose it within a lock and key. The mathematical field of encryption also tries to grant privacy by encoding messages so that only people with the necessary information can read them.

The type of encryption I want to speak about today is called XOR encryption. It is based on the logical operation called “exclusive or” (hence the name XOR). The exclusive or of two logical statements is true if exactly one of the two statements is true, but not both. This can be represented with the following truth table:

Input 1  Input 2  XOR Result
T T F
T F T
F T T
F F F

XOR encryption is particularly useful in this day and age because every character we type is understood by the computer as a sequence of zeros and ones. The current standard is Unicode (commonly encoded as UTF-8). Under this encoding the letter ‘a’ is represented as the binary string ‘01100001’. Similarly, every letter, number and special character can be represented as its own binary string. These binary strings are an assignment of numbers to characters to help represent them in the computer. The numbers can then be thought of in base 10, which is how we generally think about numbers, or in base 2, which is how computers generally work with numbers (or in a number of other ways). The way we would use these binary strings in encoding is first to translate the text from human-readable text to machine-readable text via its binary strings. For example, for the word “Invincible”, we would get the following binary strings:

Letter Unicode in base 10 Unicode in base 2
I 73 01001001
n 110 01101110
v 118 01110110
i 105 01101001
n 110 01101110
c 99 01100011
i 105 01101001
b 98 01100010
l 108 01101100
e 101 01100101
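
This table can be reproduced in a couple of lines (a quick sketch):

```python
for ch in "Invincible":
    print(ch, ord(ch), format(ord(ch), "08b"))
```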

To encrypt the message, we need a key and we simply perform an XOR operation between the key and every character in the string. Similarly, to decrypt the message, we perform the XOR operation between the key and every character in the encoded message. This means that the key (much like a normal key to a diary) must be kept private, and only those with whom the message is to be shared should have access to it.
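
Here is a minimal sketch of that idea (a repeating-key variant of my own; the script linked below may differ in its details). Because (a XOR k) XOR k gives back a, the very same function both encrypts and decrypts:

```python
def xor_cipher(text, key):
    """XOR each character's code point with the key, repeating the key as needed."""
    return "".join(chr(ord(c) ^ ord(key[i % len(key)]))
                   for i, c in enumerate(text))

secret = xor_cipher("Invincible", "k")
print(xor_cipher(secret, "k"))   # Invincible
```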

Here is a link to the script where you can check out XOR Encryption. Try it out and let me know what you think.

Discrete-time Markov Chains

Much of how we interact with life could be described as transitions between states. These states could be weather conditions (whether we are in a state of “sunny” or “rainy”), the places we may visit (maybe “school”, “the mall”, “the park” and “home”), our moods (“happy”, “angry”, “sad”). There are a number of other ways to model states and even the possibility of infinitely many states.

Markov Chains are based on the principle that the future depends only on the immediate past. So, for example, if I wished to predict tomorrow’s weather using a Markov Chain, I would only need to look at the weather for today and could ignore all previous data. I would then compare today’s state of weather with how weather has historically changed between states to determine the most likely next state (i.e. what the weather will be like tomorrow). This greatly simplifies the construction of models.

To use Markov Chains to predict the future, we first need to compute a transition matrix showing the probability (or frequency) with which we travel from one state to another, based on how often we have done so historically. This transition matrix can be calculated by looking at each element of the history as an instance of a discrete state, counting the number of times each transition occurs, and dividing each result by the number of times the origin state occurs. I’ll give an example next, and then I’ll focus on explaining the Finite Discrete State Markov Chain tool I built using JavaScript.

Next, I want to consider an example of using Markov Chains to predict the weather for tomorrow. Suppose that we have observed the weather for the last two weeks. We could then use that data to build a model to predict tomorrow’s weather. To do this, let’s first consider some states of weather. Suppose that a day can be classified in one of four different ways: {Sunny, Cloudy, Windy, Rainy}. Further, suppose that over the last two weeks we have seen the following pattern.

Day 1 Sunny
Day 2 Sunny
Day 3 Cloudy
Day 4 Rain
Day 5 Sunny
Day 6 Windy
Day 7 Rain
Day 8 Windy
Day 9 Rain
Day 10 Cloudy
Day 11 Windy
Day 12 Windy
Day 13 Windy
Day 14 Cloudy

We can look at this data and calculate the probability that we will transition from each state to each other state, which we see below:

Rain Cloudy Windy Sunny
Rain 0 1/3 1/3 1/3
Cloudy 1/2 0 1/2 0
Windy 2/5 1/5 2/5 0
Sunny 0 1/3 1/3 1/3
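
This table can be computed mechanically from the fourteen observations. A sketch (the function name is my own):

```python
from collections import Counter, defaultdict

def transition_matrix(history):
    """Estimate transition probabilities from an observed state sequence."""
    counts = defaultdict(Counter)
    for today, tomorrow in zip(history, history[1:]):
        counts[today][tomorrow] += 1
    return {s: {t: c / sum(nexts.values()) for t, c in nexts.items()}
            for s, nexts in counts.items()}

weather = ["Sunny", "Sunny", "Cloudy", "Rain", "Sunny", "Windy", "Rain",
           "Windy", "Rain", "Cloudy", "Windy", "Windy", "Windy", "Cloudy"]
print(transition_matrix(weather)["Cloudy"])   # {'Rain': 0.5, 'Windy': 0.5}
```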

Given that the weather for today is cloudy, we can look at the transition matrix and see that historically the days that followed a cloudy day have been rainy and windy days, each with probability 1/2. We can see this more mathematically by multiplying the current state vector for cloudy, [0, 1, 0, 0], by the above matrix, where we obtain the result [1/2, 0, 1/2, 0].

In similar fashion, we could use this transition matrix (let’s call it T) to predict the weather a number of days in the future by looking at T^n. For example, if we wanted to predict the weather two days in the future, we could begin with the state vector [1/2, 0, 1/2, 0] and multiply it by the matrix T to obtain [1/5, 4/15, 11/30, 1/6].

We can also obtain this by looking at the original state vector [0, 1, 0, 0] and multiplying it by T^2.

T^2 =

       Rain   Cloudy  Windy   Sunny
Rain   3/10   8/45    37/90   1/9
Cloudy 1/5    4/15    11/30   1/6
Windy  13/50  16/75   59/150  2/15
Sunny  3/10   8/45    37/90   1/9

When we multiply the original state vector by T^2 we arrive at this same answer, [1/5, 4/15, 11/30, 1/6]. This matrix T^2 has an important property: every entry is nonzero, so every state can reach every other state in two steps.

In general, if there is some power of the transition matrix in which every cell (i, j) is nonzero, then every state is reachable from every other state and we call the Markov Chain regular.

Regular Markov Chains are important because they converge to what’s called a steady state. This is a state vector x = [x0, …, xn] such that xT = x, which for a regular chain is also the vector that xT^n approaches for very large values of n. The steady state tells us how the Markov Chain will behave over long periods of time. We can use algebra and systems of linear equations to solve for this steady state vector.
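
One simple numerical way to approximate the steady state is to apply T over and over until the vector stops changing (power iteration); solving xT = x directly with linear algebra gives the exact answer. A sketch of the iterative version:

```python
def steady_state(T, steps=500):
    """Approximate the vector x with x T = x by repeatedly applying T
    to a uniform starting vector."""
    n = len(T)
    x = [1.0 / n] * n
    for _ in range(steps):
        x = [sum(x[i] * T[i][j] for i in range(n)) for j in range(n)]
    return x

# The weather example, with states ordered Rain, Cloudy, Windy, Sunny:
T = [[0, 1/3, 1/3, 1/3],
     [1/2, 0, 1/2, 0],
     [2/5, 1/5, 2/5, 0],
     [0, 1/3, 1/3, 1/3]]
print(steady_state(T))
```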

For the JavaScript program I’ve written, I have generated a set of painting samples for a fictional artist. The states are the different colors, and the transitions show which colors the artist tends to use after each other color, as well as the starting and ending colors. Given this input, we can form a Markov Chain to understand the artist’s behavior. This Markov Chain can then be used to solve for the steady state vector or to generate random paintings according to the artist’s profile. Be sure to check it out and let me know what you think.