
Independent Set Puzzles

In this post, I want to return to the idea of NP-Complete problems. There is a more technical, formal definition that I can refer you to, but I like to point to the pair of cartoons in Garey and Johnson’s “Computers and Intractability: A Guide to the Theory of NP-Completeness”. The images help convey the difficulty of NP-Complete problems. In the first, an individual tells the person behind the desk that he has been unable to solve the problem. In the second, that same individual tells the same person that not only was he unable to solve the problem, but neither was a long line of other people. The theory of NP-Complete problems revolves around the concept that if an efficient algorithm exists for one NP-Complete problem, then an efficient algorithm exists for every problem in the class NP.

Today, I would like to present a puzzle I created to play with the Independent Set problem. This is the problem where we are given a graph G = (V, E) and are asked to find a maximum set of vertices S such that there is no edge of G between any two vertices in S. The decision version of this problem (asking whether there is an independent set of size k) is NP-Complete, so the known algorithms for the problem either have a slow running time or do not solve it exactly.
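
To make the definition concrete, here is a minimal sketch in JavaScript (not the code behind the puzzle) that checks whether a chosen set of vertices is independent, and finds a maximum independent set by brute force. The exponential loop over all 2^n subsets reflects why we only expect this approach to work on small graphs.

function isIndependent(edges, vertices) {
  const chosen = new Set(vertices);
  // A set is independent if no edge has both of its endpoints in the set.
  return edges.every(([u, v]) => !(chosen.has(u) && chosen.has(v)));
}

function maximumIndependentSet(numVertices, edges) {
  let best = [];
  // Brute force: examine every subset of the vertices.
  for (let mask = 0; mask < (1 << numVertices); mask++) {
    const subset = [];
    for (let v = 0; v < numVertices; v++) {
      if (mask & (1 << v)) subset.push(v);
    }
    if (subset.length > best.length && isIndependent(edges, subset)) {
      best = subset;
    }
  }
  return best;
}

// On the path 0-1-2-3, one maximum independent set is {0, 2}.
console.log(maximumIndependentSet(4, [[0, 1], [1, 2], [2, 3]])); // [0, 2]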

This problem is closely related to another puzzle I posted last year on the Clique problem. In fact, Karp originally proved that Clique was NP-Complete by showing that if Clique could be solved efficiently, then Independent Set could be solved efficiently. He did this by constructing a second graph Ḡ, called the complement of G, which contains the same vertices as G along with exactly the edges that are not present in G (any edge present in G will not be present in Ḡ). Then he showed that the nodes representing a maximum clique in G would represent a maximum independent set in Ḡ. He had already shown that Independent Set was NP-Complete, which meant that both Independent Set and Clique were among the most difficult problems within the class known as NP.
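
Here is a small sketch of that complement construction in JavaScript (my own toy version, not Karp’s notation): we include exactly the vertex pairs that are not edges of G, so running a maximum clique algorithm on Ḡ returns a maximum independent set of G.

// Build the edge list of the complement graph Ḡ from the edges of G.
function complementGraph(numVertices, edges) {
  const present = new Set(
    edges.map(([u, v]) => `${Math.min(u, v)},${Math.max(u, v)}`)
  );
  const complementEdges = [];
  for (let u = 0; u < numVertices; u++) {
    for (let v = u + 1; v < numVertices; v++) {
      if (!present.has(`${u},${v}`)) complementEdges.push([u, v]);
    }
  }
  return complementEdges;
}

// The path 0-1-2-3 has complement edges [[0,2],[0,3],[1,3]]:
console.log(complementGraph(4, [[0, 1], [1, 2], [2, 3]]));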

The puzzle begins with an undirected graph and asks users to find a maximum independent set. Users should click on the numbers in the table below the graph to indicate the nodes they wish to select for their independent set (purple indicates that a node is selected, gray indicates that it is not). Once a user has a potential solution, they can press the “Check” button to see if their solution is optimal. If a user is having trouble and simply wishes to see the maximum independent set, they can press the “Solve” button. And to generate a new problem, users can press the “New Problem” button. As a result of the relationship between the Clique problem and the Independent Set problem, the Bron-Kerbosch algorithm that was previously used to find maximum cliques can also be used here.

Let’s Learn About XOR Encryption

One of the more common things about this generation is the constant desire to write up (type) their thoughts. So many of the conversations from my high school days were long-lasting, but quickly forgotten. Today’s generation is much more likely to blog, tweet, write status updates or simply open up a notepad file and write up their thoughts after such a conversation.

When we feel that our thoughts are not ready for public eyes (maybe you want to run your idea by the Patent and Trademark Office before speaking about it), we may seek some form of security to ensure that they stay private. An old-fashioned way of doing this was to write in a diary and keep it under lock and key. The mathematical field of encryption also tries to grant privacy by encoding messages so that only people with the necessary information can read them.

The type of encryption I want to speak about today is called XOR encryption. It is based on the logical operation called “exclusive or” (hence the name XOR). The exclusive or of two logical statements is true if exactly one of the two statements is true, but not both. This can be represented with the following truth table:

Input 1 | Input 2 | XOR Result
T       | T       | F
T       | F       | T
F       | T       | T
F       | F       | F
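
In JavaScript (the language I use for the scripts on this site), the bitwise operator ^ applies exactly this table to each pair of corresponding bits, with 1 playing the role of true and 0 the role of false:

console.log(1 ^ 1); // 0, since T XOR T = F
console.log(1 ^ 0); // 1, since T XOR F = T
console.log(0 ^ 1); // 1, since F XOR T = T
console.log(0 ^ 0); // 0, since F XOR F = F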

XOR encryption is particularly useful in this day and age because every character we type is understood by the computer as a sequence of zeros and ones. The current standard character set is Unicode, commonly stored in the UTF-8 encoding. Under this encoding the letter ‘a’ is represented as the binary string ‘01100001’. Similarly, every letter, number and special character can be represented as its own binary string. These binary strings are just an assignment of numbers to characters so that we can represent them in the computer. The numbers can then be thought of in base 10, which is how we generally think about numbers, or in base 2, which is how computers generally work with numbers (or a number of other ways). The way we would use these binary strings in encoding is first to translate a text from human-readable text to machine-readable text via its binary strings. For example, for the word “Invincible”, we would get the following binary strings:

Letter | Unicode in base 10 | Unicode in base 2
I      | 73                 | 01001001
n      | 110                | 01101110
v      | 118                | 01110110
i      | 105                | 01101001
n      | 110                | 01101110
c      | 99                 | 01100011
i      | 105                | 01101001
b      | 98                 | 01100010
l      | 108                | 01101100
e      | 101                | 01100101
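
This table can be reproduced in a few lines of JavaScript: charCodeAt gives each character’s code as a base 10 number, and toString(2) rewrites that number in base 2 (padded here to eight bits):

for (const letter of 'Invincible') {
  const code = letter.charCodeAt(0);
  console.log(letter, code, code.toString(2).padStart(8, '0'));
}
// I 73 01001001
// n 110 01101110
// ... and so on for the remaining letters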

To encrypt the message we need a key, and we simply perform an XOR operation between the key and every character in the string. Similarly, to decrypt the message we perform an XOR operation between the key and every character in the encoded message. This means that the key (much like a normal key to a diary) must be kept private, and only those with whom the message is to be shared should have access to it.
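
Here is a minimal sketch of the idea with a single-character key (a simplification for illustration; the linked script is more complete). Because XOR undoes itself, encrypting and decrypting are the very same operation:

// XOR every character of the message with the (single-character) key.
// Applying the function to the output with the same key decrypts it.
function xorCipher(message, key) {
  const keyCode = key.charCodeAt(0);
  return message
    .split('')
    .map(ch => String.fromCharCode(ch.charCodeAt(0) ^ keyCode))
    .join('');
}

const secret = xorCipher('Invincible', 'K');
console.log(xorCipher(secret, 'K')); // 'Invincible'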

Here is a link to the script where you can check out XOR Encryption. Try it out and let me know what you think.

Discrete-time Markov Chains

Much of how we interact with life could be described as transitions between states. These states could be weather conditions (whether we are in a state of “sunny” or “rainy”), the places we may visit (maybe “school”, “the mall”, “the park” and “home”), or our moods (“happy”, “angry”, “sad”). There are a number of other ways to model states, and even the possibility of infinitely many states.

Markov Chains are based on the principle that the future depends only on the immediate past. So, for example, if I wished to predict tomorrow’s weather using a Markov Chain, I would only need to look at the weather for today, and could ignore all previous data. I would then compare the state of the weather today with how the weather has historically changed between states to determine the most likely next state (i.e. what the weather will be like tomorrow). This greatly simplifies the construction of models.

To use Markov Chains to predict the future, we first need to compute a transition matrix which shows the probability (or frequency) with which we travel from one state to another, based on how often we have done so historically. This transition matrix can be calculated by looking at each element of the history as an instance of a discrete state, counting the number of times each transition occurs, and dividing each result by the number of times the origin state occurs as the origin of a transition (the final observation has no following state, so it is not counted as an origin). I’ll next give an example, and then I’ll focus on explaining the Finite Discrete State Markov Chain tool I built using JavaScript.
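
As a sketch of that calculation (a simplified version of what the tool does), here is a JavaScript function that counts the transitions in a history of observed states and divides each row of counts by the row’s total:

// Build a transition matrix from a list of states and an observed history.
function transitionMatrix(states, history) {
  const index = new Map(states.map((s, i) => [s, i]));
  const n = states.length;
  const counts = Array.from({ length: n }, () => new Array(n).fill(0));
  // Each consecutive pair in the history is one observed transition.
  for (let t = 0; t + 1 < history.length; t++) {
    counts[index.get(history[t])][index.get(history[t + 1])] += 1;
  }
  // Divide each row by its total so the rows become probabilities.
  return counts.map(row => {
    const total = row.reduce((a, b) => a + b, 0);
    return row.map(c => (total > 0 ? c / total : 0));
  });
}

// Feeding in the two weeks of weather from the example below, e.g.
// transitionMatrix(['Rainy', 'Cloudy', 'Windy', 'Sunny'], history),
// reproduces the transition matrix shown after it.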

Next, I want to consider an example of using Markov Chains to predict the weather for tomorrow. Suppose that we have observed the weather for the last two weeks. We could then use that data to build a model to predict tomorrow’s weather. To do this, let’s first consider some states of weather. Suppose that a day can be classified in one of four different ways: {Sunny, Cloudy, Windy, Rainy}. Further, suppose that over the last two weeks we have seen the following pattern.

Day 1  | Sunny
Day 2  | Sunny
Day 3  | Cloudy
Day 4  | Rainy
Day 5  | Sunny
Day 6  | Windy
Day 7  | Rainy
Day 8  | Windy
Day 9  | Rainy
Day 10 | Cloudy
Day 11 | Windy
Day 12 | Windy
Day 13 | Windy
Day 14 | Cloudy

We can look at this data and calculate the probability that we will transition from each state to each other state, which we see below:

       | Rainy | Cloudy | Windy | Sunny
Rainy  | 0     | 1/3    | 1/3   | 1/3
Cloudy | 1/2   | 0      | 1/2   | 0
Windy  | 2/5   | 1/5    | 2/5   | 0
Sunny  | 0     | 1/3    | 1/3   | 1/3

Given that the weather today is cloudy, we can look at the transition matrix and see that historically the days that followed a cloudy day have been rainy and windy days, each with probability 1/2. We can see this more mathematically by multiplying the current state vector for cloudy, [0, 1, 0, 0], by the above matrix, which gives the result [1/2, 0, 1/2, 0].
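
That multiplication is short to sketch in JavaScript: entry j of the result is the dot product of the state vector with column j of the matrix. Applying it twice gives the two-day prediction discussed next.

// Multiply a row state vector by a transition matrix.
function step(stateVector, matrix) {
  return matrix[0].map((_, j) =>
    stateVector.reduce((sum, p, i) => sum + p * matrix[i][j], 0)
  );
}

const T = [
  [0,     1 / 3, 1 / 3, 1 / 3], // Rainy
  [1 / 2, 0,     1 / 2, 0    ], // Cloudy
  [2 / 5, 1 / 5, 2 / 5, 0    ], // Windy
  [0,     1 / 3, 1 / 3, 1 / 3], // Sunny
];

console.log(step([0, 1, 0, 0], T));          // [0.5, 0, 0.5, 0] = [1/2, 0, 1/2, 0]
console.log(step(step([0, 1, 0, 0], T), T)); // [0.2, 0.2666..., 0.3666..., 0.1666...] = [1/5, 4/15, 11/30, 1/6]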

In similar fashion, we could use this transition matrix (let’s call it T) to predict the weather a number of days into the future by looking at T^n. For example, if we wanted to predict the weather two days in the future, we could begin with the state vector [1/2, 0, 1/2, 0] and multiply it by the matrix T to obtain [1/5, 4/15, 11/30, 1/6].

We can also obtain this by taking the original state vector [0, 1, 0, 0] and multiplying it by T^2.

T^2 =

       | Rainy | Cloudy | Windy  | Sunny
Rainy  | 3/10  | 8/45   | 37/90  | 1/9
Cloudy | 1/5   | 4/15   | 11/30  | 1/6
Windy  | 13/50 | 16/75  | 59/150 | 2/15
Sunny  | 3/10  | 8/45   | 37/90  | 1/9

When we multiply the original state vector by T^2 we arrive at the same answer, [1/5, 4/15, 11/30, 1/6]. This matrix T^2 has an important property: every entry is nonzero, so every state can reach every other state.

In general, if there is some power of the transition matrix in which every cell (i, j) is nonzero, then every state is reachable from every other state and we call the Markov Chain regular.

Regular Markov Chains are important because they converge to what’s called a steady state. This is a state vector x = [x0, …, xn] such that xT = x, so that once the chain reaches x it stays there. The steady state tells us how the Markov Chain will perform over long periods of time. We can use algebra and systems of linear equations to solve for this steady state vector.
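
Alternatively, here is a quick numerical sketch: since a regular chain converges, repeatedly multiplying any starting state vector by T approaches the steady state.

// Approximate the steady state of a regular Markov Chain by iterating.
function steadyState(matrix, iterations = 1000) {
  const n = matrix.length;
  let v = new Array(n).fill(1 / n); // start from the uniform distribution
  for (let k = 0; k < iterations; k++) {
    v = matrix[0].map((_, j) => v.reduce((s, p, i) => s + p * matrix[i][j], 0));
  }
  return v;
}

console.log(steadyState([
  [0,     1 / 3, 1 / 3, 1 / 3],
  [1 / 2, 0,     1 / 2, 0    ],
  [2 / 5, 1 / 5, 2 / 5, 0    ],
  [0,     1 / 3, 1 / 3, 1 / 3],
])); // long-run proportions of Rainy, Cloudy, Windy, Sunny days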

For the JavaScript program I’ve written, I have generated a set of painting samples for a fictional artist. The states are the different colors, and the transitions are the colors that the artist will use after other colors, as well as the starting and ending colors. Given this input, we can form a Markov Chain to understand the artist’s behavior. This Markov Chain can then be used to solve for the steady state vector or to generate random paintings according to the artist’s profile. Be sure to check it out and let me know what you think.

The Depth-First-Search Algorithm

I remember when I was younger I used to play the game of hide-and-seek a lot. This is a game where a group of people (at least two) separate into a group of hiders and a group of seekers. The most common version of this that I’ve seen is having one person as the seeker and everyone else as hiders. Initially, the seeker is given a number to count towards and closes their eyes while counting. The hiders then search for places to hide from the seeker. Once the seeker is finished counting, their job is to find where everyone is hiding, or to admit that they cannot find all the hiders. Any hiders not found are said to have won, and hiders that are found are said to have lost.

I played this game a number of times in my childhood, but I remember playing it with a friend named Dennis in particular. Dennis had a certain way he played as seeker. While many of us would simply go to places we deemed as “likely” hiding spots in a somewhat random order, Dennis would always begin by looking in one area of the room, making sure that he had searched through every area connected to that area before going to a new area. He continued this process until he either found everybody or concluded that he had searched every spot he could think of and gave up.

It wasn’t until years later that I was able to note the similarity between Dennis’s way of playing hide-and-seek and the Depth-First-Search algorithm. The Depth-First-Search algorithm is a way of exploring all the nodes in a graph. Similar to hide-and-seek, one could choose to do this in a number of different ways. Depth-First-Search does this by beginning at some node, looking first at one of the neighbors of that node, then looking at one of the neighbors of this new node. If the new node does not have any unexplored neighbors, then the algorithm returns to the previous node, looks at the next neighbor of that node and continues from there. Initially all nodes are unmarked, and the algorithm proceeds by marking nodes as being in one of three states: nodes that have been visited are marked “visited”; nodes that have been discovered but not visited yet are marked “to-visit”; and nodes that have not been discovered at all are “unvisited”.
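
Here is a sketch of the algorithm in JavaScript (a simplified version of what the Examples page runs). The graph is an adjacency list, and the to-visit list is a stack, so the most recently discovered node is always explored next:

// Depth-First-Search over an adjacency list, returning nodes in visit order.
function depthFirstSearch(adjacency, start) {
  const mark = new Array(adjacency.length).fill('unvisited');
  const stack = [start]; // the "to-visit" list
  const visitOrder = [];
  mark[start] = 'to-visit';
  while (stack.length > 0) {
    const node = stack.pop(); // most recently discovered node first
    mark[node] = 'visited';
    visitOrder.push(node);
    for (const neighbor of adjacency[node]) {
      if (mark[neighbor] === 'unvisited') {
        mark[neighbor] = 'to-visit';
        stack.push(neighbor);
      }
    }
  }
  return visitOrder;
}

// My reading of the bedroom graph below as an adjacency list (node 0 is the door):
const bedroom = [[1, 5, 6], [0], [4, 6], [6], [2], [0], [0, 2, 3, 7], [6]];
console.log(depthFirstSearch(bedroom, 0)); // [0, 6, 7, 3, 2, 4, 5, 1]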

Consider a bedroom with the following possible hiding locations: (1) Under Bed, (2) Behind Cabinet, (3) In Closet, (4) Under Clothes, (5) Behind Curtains, (6) Behind Bookshelf, and (7) Under Desk. We can visualize how the bedroom is arranged as a graph and then use a Depth-First-Search algorithm to show how Dennis would search the room. Consider the following bedroom arrangement, where we have replaced the name of each item with the number corresponding to that item. Node (0) corresponds to the door, which is where Dennis stands and counts while others hide.

Bedroom Items as a Graph

Now consider how a Depth-First-Search would be run on this graph.

Bedroom Items as a Graph Colored by DFS

The colors correspond to the order in which nodes are visited in Depth-First-Search.

The way we read this is that initially Dennis would start at node 0, which is colored in blue.
While Dennis is at node 0, he notices that nodes 1, 5, and 6 (under bed, behind curtains, and behind bookshelf) are nearby and have not been checked yet, so he places them on the “to-visit” list.
Next, Dennis will begin to visit each node on the “to-visit” list, and when a node is visited, he labels it as visited. At each location, he also takes note of the other locations he can reach from that location. Below is the order of the nodes Dennis visits and how he discovers new locations to visit.

Order Visited | Node | Stack      | Adding  | Distance From Node 0
1             | 0    |            | 6, 5, 1 | 0
2             | 6    | 5, 1       | 7, 3, 2 | 1
3             | 7    | 3, 2, 5, 1 |         | 2
4             | 3    | 2, 5, 1    |         | 2
5             | 2    | 5, 1       | 4       | 2
6             | 4    | 5, 1       |         | 3
7             | 5    | 1          |         | 1
8             | 1    |            |         | 1

Here is a link to my Examples page that implements the Depth-First-Search Algorithm on Arbitrary Graphs.

The Breadth-First-Search Algorithm

I remember when I was younger I used to play the game of hide-and-seek a lot. This is a game where a group of people (at least two) separate into a group of hiders and a group of seekers. The most common version of this that I’ve seen is having one person as the seeker and everyone else as hiders. Initially, the seeker is given a number to count towards and closes their eyes while counting. The hiders then search for places to hide from the seeker. Once the seeker is finished counting, their job is to find where everyone is hiding, or to admit that they cannot find all the hiders. Any hiders not found are said to have won, and hiders that are found are said to have lost.

I played this game a number of times in my childhood, but I remember playing it with a friend named Brenda in particular. Brenda had a certain way she played as seeker. While many of us would simply go to places we deemed as “likely” hiding spots in a somewhat random order, Brenda would always take a survey of the room, and no matter where she began searching, she would make note of the locations close to her starting point and make sure she gave them all a look before moving on to the locations close to those. She continued this process until she either found everybody or concluded that she had searched every spot she could think of and gave up.

It wasn’t until years later that I was able to note the similarity between Brenda’s way of playing hide-and-seek and the Breadth-First-Search algorithm. The Breadth-First-Search algorithm is a way of exploring all the nodes in a graph. Similar to hide-and-seek, one could choose to do this in a number of different ways. Breadth-First-Search does this by beginning at some node, looking first at each of the neighbors of the starting node, then looking at each of the neighbors of the neighbors of the starting node, continuing this process until there are no remaining nodes to visit. Initially all nodes are unmarked, and the algorithm proceeds by marking nodes as being in one of three states: nodes that have been visited are marked “visited”; nodes that have been discovered but not visited yet are marked “to-visit”; and nodes that have not been discovered at all are “unvisited”.
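
Here is the matching sketch in JavaScript (again, a simplified version of what the Examples page runs). The only change from Depth-First-Search is that the to-visit list is a queue, so the earliest discovered node is explored next:

// Breadth-First-Search over an adjacency list, returning nodes in visit order.
function breadthFirstSearch(adjacency, start) {
  const mark = new Array(adjacency.length).fill('unvisited');
  const queue = [start]; // the "to-visit" list
  const visitOrder = [];
  mark[start] = 'to-visit';
  while (queue.length > 0) {
    const node = queue.shift(); // earliest discovered node first
    mark[node] = 'visited';
    visitOrder.push(node);
    for (const neighbor of adjacency[node]) {
      if (mark[neighbor] === 'unvisited') {
        mark[neighbor] = 'to-visit';
        queue.push(neighbor);
      }
    }
  }
  return visitOrder;
}

// My reading of the bedroom graph below as an adjacency list (node 0 is the door):
const bedroom = [[1, 5, 6], [0, 2, 4], [1], [6], [1], [0], [0, 3, 7], [6]];
console.log(breadthFirstSearch(bedroom, 0)); // [0, 1, 5, 6, 2, 4, 3, 7]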

Consider a bedroom with the following possible hiding locations: (1) Under Bed, (2) Behind Cabinet, (3) In Closet, (4) Under Clothes, (5) Behind Curtains, (6) Behind Bookshelf, and (7) Under Desk. We can visualize how the bedroom is arranged as a graph and then use a Breadth-First-Search algorithm to show how Brenda would search the room. Consider the following bedroom arrangement, where we have replaced the name of each item with the number corresponding to that item. Node (0) corresponds to the door, which is where Brenda stands and counts while others hide.

Bedroom Items as a Graph

Now consider how a Breadth-First-Search would be run on this graph.

Bedroom Items as a Graph

The colors correspond to the order in which nodes are visited in Breadth-First-Search.

The way we read this is that initially Brenda would start at node 0, which is colored in blue.
While Brenda is at node 0, she notices that nodes 1, 5, and 6 (under bed, behind curtains, and behind bookshelf) are nearby and have not been checked yet, so she places them on the “to-visit” list.
Next, Brenda will begin to visit each node on the “to-visit” list, and when a node is visited, she labels it as visited. At each location, she also takes note of the other locations she can reach from that location. Below is the order of the nodes Brenda visits and how she discovers new locations to visit.

Order Visited | Node | Queue   | Adding  | Distance From Node 0
1             | 0    |         | 1, 5, 6 | 0
2             | 1    | 5, 6    | 2, 4    | 1
3             | 5    | 6, 2, 4 |         | 1
4             | 6    | 2, 4    | 3, 7    | 1
5             | 2    | 4, 3, 7 |         | 2
6             | 4    | 3, 7    |         | 2
7             | 3    | 7       |         | 2
8             | 7    |         |         | 2

Here is a link to my Examples page that implements the Breadth-First-Search Algorithm on Arbitrary Graphs.