
Discrete-time Markov Chains

Much of how we interact with life could be described as transitions between states. These states could be weather conditions (whether we are in a state of “sunny” or “rainy”), the places we may visit (maybe “school”, “the mall”, “the park” and “home”), or our moods (“happy”, “angry”, “sad”). There are many other ways to choose states, and we can even allow infinitely many of them.

Markov Chains are based on the principle that the future depends only on the immediate past. So, for example, if I wished to predict tomorrow’s weather using a Markov Chain, I would only need to look at the weather for today and could ignore all previous data. I would then compare today’s state with how the weather has historically changed between states to determine the most likely next state (i.e., what the weather will be like tomorrow). This greatly simplifies the construction of models.

To use Markov Chains to predict the future, we first need to compute a transition matrix, which shows the probability (or frequency) that we will travel from one state to another based on how often we have done so historically. This transition matrix can be calculated by treating each element of the history as an instance of a discrete state, counting the number of times each transition occurs, and dividing each count by the number of times the origin state occurs. I’ll give an example next, and then I’ll focus on explaining the Finite Discrete State Markov Chain tool I built using JavaScript.

Next, I want to consider an example of using Markov Chains to predict the weather for tomorrow. Suppose that we have observed the weather for the last two weeks. We could then use that data to build a model to predict tomorrow’s weather. To do this, let’s first consider some states of weather. Suppose that a day can be classified in one of four different ways: {Sunny, Cloudy, Windy, Rain}. Further, suppose that over the last two weeks we have seen the following pattern.

Day 1 Sunny
Day 2 Sunny
Day 3 Cloudy
Day 4 Rain
Day 5 Sunny
Day 6 Windy
Day 7 Rain
Day 8 Windy
Day 9 Rain
Day 10 Cloudy
Day 11 Windy
Day 12 Windy
Day 13 Windy
Day 14 Cloudy

We can look at this data and calculate the probability that we will transition from each state to each other state. The result is the transition matrix below, where the rows give today’s state and the columns give tomorrow’s:

        Rain   Cloudy  Windy  Sunny
Rain    0      1/3     1/3    1/3
Cloudy  1/2    0       1/2    0
Windy   2/5    1/5     2/5    0
Sunny   0      1/3     1/3    1/3
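
To make the counting procedure concrete, here is a minimal JavaScript sketch (much simpler than the actual tool) that rebuilds this transition matrix from the fourteen observations; the function and variable names are mine and chosen only for illustration, and the entries come out as decimals rather than fractions.

```javascript
// A minimal sketch of building a transition matrix from an observed
// sequence of states, using the two weeks of weather above.
const states = ["Rain", "Cloudy", "Windy", "Sunny"];
const history = ["Sunny", "Sunny", "Cloudy", "Rain", "Sunny", "Windy", "Rain",
                 "Windy", "Rain", "Cloudy", "Windy", "Windy", "Windy", "Cloudy"];

function transitionMatrix(states, history) {
  const n = states.length;
  const index = Object.fromEntries(states.map((s, i) => [s, i]));
  const counts = Array.from({ length: n }, () => Array(n).fill(0));

  // Count every observed transition from one day to the next.
  for (let t = 0; t < history.length - 1; t++) {
    counts[index[history[t]]][index[history[t + 1]]]++;
  }

  // Divide each row by the number of times its origin state was left.
  return counts.map(row => {
    const total = row.reduce((a, b) => a + b, 0);
    return total === 0 ? row : row.map(c => c / total);
  });
}

console.log(transitionMatrix(states, history));
// Rows and columns are ordered Rain, Cloudy, Windy, Sunny; the decimal
// entries match the fractions in the table above.
```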

Given that the weather for today is cloudy, we can look at the transition matrix and see that historically the days that followed a cloudy day have been rainy and windy days, each with probability 1/2. We can see this more mathematically by multiplying the current state vector for cloudy, [0, 1, 0, 0], by the above matrix, which gives the result [1/2, 0, 1/2, 0].

In similar fashion, we could use this transition matrix (let’s call it T) to predict the weather a number of days in the future by looking at T^n. For example, if we wanted to predict the weather two days in the future, we could begin with the state vector [1/2, 0, 1/2, 0] and multiply it by the matrix T to obtain [1/5, 4/15, 11/30, 1/6].
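
Here is a small JavaScript sketch of that prediction step, with names again chosen only for illustration; multiplying a state vector by T once gives the distribution for tomorrow, and repeating the multiplication gives the effect of T^n.

```javascript
// A small sketch of predicting future states: multiply a state vector by T,
// or repeat the multiplication to get the effect of T^n.
function multiplyVectorMatrix(v, T) {
  return T[0].map((_, j) => v.reduce((sum, vi, i) => sum + vi * T[i][j], 0));
}

function predict(startVector, T, days) {
  let v = startVector;
  for (let d = 0; d < days; d++) v = multiplyVectorMatrix(v, T);
  return v;
}

const T = [
  [0,     1 / 3, 1 / 3, 1 / 3], // Rain
  [1 / 2, 0,     1 / 2, 0    ], // Cloudy
  [2 / 5, 1 / 5, 2 / 5, 0    ], // Windy
  [0,     1 / 3, 1 / 3, 1 / 3], // Sunny
];

const cloudyToday = [0, 1, 0, 0];
console.log(predict(cloudyToday, T, 1)); // [0.5, 0, 0.5, 0]
console.log(predict(cloudyToday, T, 2)); // [0.2, 0.2666..., 0.3666..., 0.1666...]
```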

We can also obtain this by taking the original state vector [0, 1, 0, 0] and multiplying it by T^2.

T^2 =

3/10   8/45   37/90   1/9
1/5    4/15   11/30   1/6
13/50  16/75  59/150  2/15
3/10   8/45   37/90   1/9

When we multiply the original state vector by T^2 we arrive at the same answer, [1/5, 4/15, 11/30, 1/6]. This matrix T^2 has an important property: every entry is nonzero, which means every state can reach every other state within two steps.

In general, if there is some power of the transition matrix in which every cell (i, j) is nonzero, then every state is reachable from every other state and we call the Markov Chain regular.

Regular Markov Chains are important because they converge to what’s called a steady state. This is a state vector x = [x_0, …, x_n] such that xT = x, and it is the vector that xT^n approaches for very large values of n no matter which state vector we start from. The steady state tells us how the Markov Chain behaves over long periods of time. We can use algebra and systems of linear equations to solve for this steady state vector.
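
As a rough illustration, here is one way to approximate the steady state in JavaScript by repeatedly multiplying a distribution by T until it stops changing (power iteration), reusing the matrix T from the earlier sketch; the tolerance and step limit are arbitrary choices of mine, and solving the linear system xT = x directly gives the exact answer.

```javascript
// A sketch of approximating the steady state by power iteration: keep
// multiplying a starting distribution by T until it stops changing.
// Assumes the matrix T defined in the earlier sketch.
function steadyState(T, tolerance = 1e-10, maxSteps = 10000) {
  let v = Array(T.length).fill(1 / T.length); // start with a uniform distribution
  for (let step = 0; step < maxSteps; step++) {
    const next = T[0].map((_, j) => v.reduce((s, vi, i) => s + vi * T[i][j], 0));
    const change = next.reduce((m, x, i) => Math.max(m, Math.abs(x - v[i])), 0);
    v = next;
    if (change < tolerance) break;
  }
  return v;
}

console.log(steadyState(T)); // long-run fraction of Rainy, Cloudy, Windy, Sunny days
```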

For the JavaScript program I’ve written, I have generated a set of painting samples for a fictional artist. The states are the different colors, and the transitions are the colors that the artist will use after other colors, as well as the starting and ending colors. Given this input, we can form a Markov Chain to understand the artist’s behavior. This Markov Chain can then be used to solve for the steady state vector or to generate random paintings according to the artist’s profile. Be sure to check it out and let me know what you think.

QR Decomposition

Suppose we have a problem that can be modeled by the system of equations Ax = b with a matrix A and a vector b. We have already shown how to use Gaussian Elimination to solve these systems, but I would like to introduce you to another method – QR decomposition. In addition to solving the general system of equations, this method can also be used in a number of other algorithms, including linear regression analysis (to solve the least squares problem) and finding the eigenvalues of a matrix.

Generally, when we see a system of equations, the usual way to proceed in solving it is Gaussian elimination or LU decomposition. However, for some matrices, particularly ill-conditioned ones, these methods can lead to significant rounding errors. In such cases we need a more stable algorithm, which is when the QR decomposition method becomes important.

Any matrix A of dimensions m by n, with m >= n, can be represented as the product of two matrices: an m by m orthogonal matrix Q, i.e. Q^T Q = I, and an m by n matrix of the form [R 0]^T, where R is n by n and upper triangular. We perform this decomposition by first treating the columns of the matrix A as vectors a_1, …, a_n. Then, for each vector a_k, we calculate the vectors u_k and e_k given by

u_k = a_k − Σ_{j=1}^{k−1} proj_{e_j}(a_k)
and
e_k = u_k / ||u_k||
Then Q = [e_1, …, e_n] (this is the reduced factorization, in which Q has n orthonormal columns) and, in the case n = 3,

R =
<e_1, a_1>  <e_1, a_2>  <e_1, a_3>
0           <e_2, a_2>  <e_2, a_3>
0           0           <e_3, a_3>
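
Here is a minimal JavaScript sketch of this Gram-Schmidt construction. It produces the reduced factorization, in which Q has n orthonormal columns; the function name and the small example matrix are mine, and in practice more stable variants (such as Householder reflections) are usually preferred.

```javascript
// A minimal sketch of QR decomposition via classical Gram-Schmidt.
// A is given as an array of rows; we work with its columns a_1, ..., a_n.
function qrDecompose(A) {
  const m = A.length, n = A[0].length;
  const column = j => A.map(row => row[j]);
  const dot = (x, y) => x.reduce((s, xi, i) => s + xi * y[i], 0);

  const e = []; // orthonormal columns e_1, ..., e_n
  const R = Array.from({ length: n }, () => Array(n).fill(0));

  for (let k = 0; k < n; k++) {
    let u = column(k);
    for (let j = 0; j < k; j++) {
      R[j][k] = dot(e[j], column(k));                 // <e_j, a_k>
      u = u.map((ui, i) => ui - R[j][k] * e[j][i]);   // subtract the projection onto e_j
    }
    R[k][k] = Math.sqrt(dot(u, u));                   // ||u_k||
    e.push(u.map(ui => ui / R[k][k]));
  }

  const Q = A.map((_, i) => e.map(col => col[i]));    // Q has e_k as its columns
  return { Q, R };
}

const { Q, R } = qrDecompose([[1, 1], [1, 0], [0, 1]]);
console.log(Q, R); // Q has orthonormal columns, and multiplying Q by R reproduces A
```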

Here is a link to the JavaScript program I wrote to show how QR Decomposition works.

The Assignment Problem

I just finished a script that generates instances of the assignment problem and solves them step by step. You can check it out here.

[Image: an example instance of the assignment problem]

Suppose you are the owner of a company and need to delegate tasks to your employees. You’ve generated a table that tells how long (in minutes) you think it would take each person to accomplish each individual task (called Jobs). Your goal is to find an assignment of people to jobs that minimizes the total amount of time it will take to complete all jobs. The requirements are that each job must be completed by only one person, and each person can complete only one job.

We can think of the employees as our supply and the jobs as the demand. In order for this problem to have a feasible solution, we must have enough people (supply) to complete the number of jobs (demand). Because of this, our examples will all consider situations where there are exactly as many people as jobs.

To solve this problem, we must first generate an initial assignment and see how good this assignment is. There are several ways of generating an initial solution, but two that I wanted to focus on are the “NorthWest Corner Rule” and the “Minimum Matrix Method”.

  1. The Northwest Corner Rule considers the matrix and repeatedly assigns the top remaining row to the left-most remaining column. If we think of the cost matrix as being like a map, then “top” corresponds to “north” and left-most to “west”, hence the name. In assignment problems, this will result in the main diagonal being selected.
  2. The Minimum Matrix Method is an iterative method that searches for the minimum remaining cell in the matrix, assigns that person to the associated job, removes both from consideration, and repeats until all people have been assigned to jobs; a small sketch of this method appears below the list.
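
Here is a rough JavaScript sketch of the Minimum Matrix Method; the cost matrix at the bottom is made up purely to exercise the function.

```javascript
// A rough sketch of the Minimum Matrix Method for an initial assignment:
// repeatedly pick the smallest remaining cell, assign that person to that job,
// and remove both from further consideration.
function minimumMatrixMethod(cost) {
  const n = cost.length;
  const freePeople = new Set(cost.keys());
  const freeJobs = new Set(cost.keys());
  const assignment = Array(n).fill(-1); // assignment[person] = job

  for (let step = 0; step < n; step++) {
    let best = { person: -1, job: -1, value: Infinity };
    for (const p of freePeople) {
      for (const j of freeJobs) {
        if (cost[p][j] < best.value) best = { person: p, job: j, value: cost[p][j] };
      }
    }
    assignment[best.person] = best.job;
    freePeople.delete(best.person);
    freeJobs.delete(best.job);
  }
  return assignment;
}

// A made-up 3 x 3 table of minutes, just to demonstrate the method.
const minutes = [
  [14, 5, 8],
  [2, 12, 6],
  [7, 8, 3],
];
console.log(minimumMatrixMethod(minutes)); // [1, 0, 2]: person 0 -> job 1, person 1 -> job 0, person 2 -> job 2
```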

Once we have formulated an initial feasible solution, we need to check it for optimality. To do this, we use the Network Simplex Method, where we build a basis from this initial solution. When we consider this problem as a network flow problem, a basis for the problem is a spanning tree of the network (a set of edges with no cycles that touches every node; with n people and n jobs it contains 2n − 1 edges). Because the assignment solution only contains one edge for every two nodes in the graph, we need to add a number of edges that carry no flow (which makes the solution degenerate) to form this spanning tree.

Once a spanning tree is formulated, we can solve for the dual variables by arbitrarily setting one node’s dual value to zero and solving for the remaining dual variables under the requirement that every arc in the basis (spanning tree) satisfies the equation u_k + v_i = c_ki for each person k and job i.

When we have dual variables, we can check whether this solution is optimal by checking that none of the remaining constraints are violated. This means that for every person k and every job i, we must have u_k + v_i <= c_ki (notice that this is a relaxed version of the equation we used when solving for the dual variables themselves).
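
As a sketch, the optimality check might look like the following in JavaScript, assuming the dual values u and v have already been computed from the spanning tree as described above.

```javascript
// A sketch of the optimality check: given dual values u (people) and v (jobs),
// look for a person/job pair whose constraint u_k + v_i <= c_ki is violated.
function findViolatedConstraint(cost, u, v) {
  for (let k = 0; k < cost.length; k++) {
    for (let i = 0; i < cost[k].length; i++) {
      if (u[k] + v[i] > cost[k][i]) {
        return { person: k, job: i, amount: u[k] + v[i] - cost[k][i] };
      }
    }
  }
  return null; // no violation: the current assignment is optimal
}
```

If this returns a pair, the corresponding edge enters the basis, which is exactly the situation described next.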

If a constraint is found to be violated, then we need to add the associated edge to the basis and remove an edge on the cycle that forms as a result, which generates a new solution.

So check out The Assignment Problem Script and let me know what you think.

My Sudoku Program

Earlier this week, I was able to write a script that solves the popular Sudoku game. It was a fun experience because along the way I learned about a new way to implement the backtracking algorithm on exact cover problems.

Say we have a set of items. They could be practically anything, but for the sake of understanding let’s think of something specific, like school supplies. In particular, let’s say that school is starting back up and you have a checklist of things you need to buy:

  • a pencil
  • a pen
  • a notebook
  • a spiral tablet
  • a ruler
  • a calculator
  • a lamp
  • a bookbag (or knapsack as I just learned people call them)
  • and some paper

Suppose that as you’re shopping for these items you see that stores also sell the items as collections. In order to spend less money, you think it may be best to buy the items in collections instead of individually. Say these collections are:

  • a pencil, a notebook and a ruler
  • a pen, a calculator and a bookbag
  • a spiral tablet, a lamp and some paper

Choosing a set of collections so that every item on the checklist ends up bought exactly once is an instance of the exact cover problem. The exact cover problem is a very important problem in computer science because many other problems can be described as problems of exact cover. Because the problem is in general difficult, most strategies for solving it still take a long time. A common technique for solving these problems is called backtracking, which is basically a trial-and-error way of solving the problem. We try to construct a solution to the problem, and each time we realize our partial solution will not work, we back up by eliminating part of the current solution and try to construct another one. The main caveat is that we need to keep track of the partial solutions we have already tried so that we do not retry the same solutions over and over again.
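
Here is a toy JavaScript sketch of backtracking on the school-supply example above. It is far simpler than what a Sudoku solver needs, but it shows the try, recurse, and back-up pattern.

```javascript
// A toy sketch of exact cover by backtracking, using the school-supply
// collections above: pick a set of collections so every item is bought exactly once.
const items = ["pencil", "pen", "notebook", "spiral tablet", "ruler",
               "calculator", "lamp", "bookbag", "paper"];
const collections = [
  ["pencil", "notebook", "ruler"],
  ["pen", "calculator", "bookbag"],
  ["spiral tablet", "lamp", "paper"],
];

function exactCover(items, collections, chosen = [], covered = new Set()) {
  if (covered.size === items.length) return chosen;        // everything covered exactly once
  const target = items.find(item => !covered.has(item));   // an item still missing
  for (let c = 0; c < collections.length; c++) {
    const col = collections[c];
    // Only try collections that contain the missing item and overlap nothing chosen so far.
    if (!col.includes(target) || col.some(item => covered.has(item))) continue;
    col.forEach(item => covered.add(item));
    const solution = exactCover(items, collections, [...chosen, c], covered);
    if (solution) return solution;
    col.forEach(item => covered.delete(item));              // backtrack
  }
  return null;
}

console.log(exactCover(items, collections)); // [0, 1, 2]: buy all three collections
```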

[Image: an example of a Sudoku board]

In particular, Sudoku puzzles can be described as exact cover problems. Doing this involves translating the rules of Sudoku into exact cover statements. There are four basic rules of Sudoku that we need to take into consideration.

  • Each cell in the grid must receive a value.
  • A value can only appear once in each row.
  • A value can only appear once in each column.
  • A value can only appear once in each pre-defined 3 by 3 subgrid.

In actuality these statements are joined together: each position in the grid we (try to) fill in has effects in all four areas at once. We can set up the Sudoku problem as an exact cover problem by noting these effects.

  • If a cell (i, j) in the grid receives the value v, then we also need to note that row i, column j, and the pre-defined subgrid containing (i, j) have each received the value v.

We can keep track of this by defining a table.

[Image: an exact cover representation of a Sudoku move, showing how the new table is constructed from a given move]
  • The first 81 (81 = 9*9) columns in the table answer the question: does cell (i, j) have anything in it? This tells us whether the cell is empty or not, but not the value in the cell.
  • The next 81 columns in the table answer the question: does row i (of the Sudoku grid) have the value v in it yet?
  • The next 81 columns in the table answer the question: does column j (of the Sudoku grid) have the value v in it yet?
  • The final 81 columns in the table answer the question: does the (3×3) subgrid containing (i, j) have the value v in it yet?

Notice that individually these columns do not give us much information about the grid, but because each value we place into the grid has a precise location (a row, a column, a value and a subgrid), we link these columns of the new table together and are able to understand more about the implications of each Sudoku move. There are 9 possible rows, 9 possible columns and 9 possible values for each move, creating 729 possible moves. For each possible move, we create a row in the new table which links the columns of this new table together as follows:

  • row[i*9+j] = 1
    (this says that cell (i, j) is nonempty)
  • row[81+i*9+v] = 1
    (this says that cell row i has value v)
  • row[81*2+j*9+v] = 1
    (this says that column j has value v)
  • row[81*3+(Math.floor(i/3)*3+Math.floor(j/3))*9+v] = 1
    (this says that the 3 x 3 subgrid of (i, j) has value v)
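
Here is a short JavaScript sketch that builds these 729 rows exactly as described in the list above, with i, j and v all running from 0 to 8.

```javascript
// A sketch of building the exact cover table for Sudoku: one row per possible
// move (i, j, v), each with exactly four 1's among the 324 columns described above.
// Here v = 0 stands for the digit 1, v = 1 for the digit 2, and so on.
function buildExactCoverRows() {
  const rows = [];
  for (let i = 0; i < 9; i++) {
    for (let j = 0; j < 9; j++) {
      for (let v = 0; v < 9; v++) {
        const row = Array(324).fill(0);
        row[i * 9 + j] = 1;                                                    // cell (i, j) is filled
        row[81 + i * 9 + v] = 1;                                               // row i contains v
        row[81 * 2 + j * 9 + v] = 1;                                           // column j contains v
        row[81 * 3 + (Math.floor(i / 3) * 3 + Math.floor(j / 3)) * 9 + v] = 1; // subgrid contains v
        rows.push({ i, j, v, row });
      }
    }
  }
  return rows; // 729 rows of 324 columns
}

console.log(buildExactCoverRows().length); // 729
```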

Our goal is to choose a set of moves that fills in the grid. In the exact cover representation of the problem, this means we need to select a set of rows of the new table such that each column of the new table has exactly one 1 in our solution.

In my script I solve this problem using backtracking and the dancing links representation of this exact cover problem. The way dancing links works is that the exact cover version of the problem is translated from a simple table into a graphical one, where each cell becomes a node with pointers to the nodes above and below it in its column and to the nodes to its left and right in its row. There is also a header row of column headers, each of which contains information about its column, particularly the column number and the number of cells that can cover it (basically the number of 1’s in that column of the table form of the problem). Finally, there is a head node which points to the first column.

Once the graph version of the matrix is set up, the algorithm recursively selects a column and then a row that covers that column. Once these have been chosen, it tries to solve the smaller version of the problem that remains. If this is possible it continues; if not, it backs up and chooses another row, or another column and row.
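
As an illustration of the “dancing” part, here is a sketch of the cover and uncover operations at the heart of the search, assuming each node carries the left/right/up/down pointers and column reference described above. This is not the complete solver, only the unlinking and relinking step that makes backtracking cheap.

```javascript
// "Covering" a column unlinks its header and every row that uses it;
// "uncovering" relinks them in the reverse order.
// Each node is assumed to have left/right/up/down pointers, and a reference
// to its column header, which keeps a running size (count of 1's).
function cover(column) {
  column.right.left = column.left;          // unlink the column header
  column.left.right = column.right;
  for (let row = column.down; row !== column; row = row.down) {
    for (let node = row.right; node !== row; node = node.right) {
      node.down.up = node.up;               // unlink this node from its column
      node.up.down = node.down;
      node.column.size--;
    }
  }
}

function uncover(column) {
  for (let row = column.up; row !== column; row = row.up) {
    for (let node = row.left; node !== row; node = node.left) {
      node.column.size++;
      node.down.up = node;                  // relink this node into its column
      node.up.down = node;
    }
  }
  column.right.left = column;               // relink the column header
  column.left.right = column;
}
```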

I hope that was an understandable overview of the procedures I used to implement this algorithm. Otherwise I hope you enjoy the script and use it freely to help with your Sudoku puzzles.

Gaussian Elimination

I have just written a script which executes the Gaussian Elimination Algorithm.

When we have a collection of lines, we wish to know whether they all intersect at some point, and many times we are interested in determining what that point is. In order to calculate this, we first need a representation of the lines themselves. The Gaussian Elimination Algorithm takes the collection of lines as input in the form Ax = b, where the matrix A is called the coefficient matrix; its nth row holds the coefficients of the nth line being considered. The vector b is the right hand side vector (in two dimensions, we would call these constants the y-intercepts of the lines; in higher dimensions they play a similar role). The vector x represents the point where the lines intersect, and it is this quantity which Gaussian Elimination seeks to determine.

The basic procedure of Gaussian Elimination is to use “elementary row operations” on the matrix (A|b), which is called the augmented matrix, to transform A into upper triangular form. Once this is done, a procedure called back-substitution can find the solution x to this problem.

The elementary row operations that we are allowed to perform are:

  • Interchange two rows.
  • Multiply a row by a nonzero number.
  • Add a row to another one multiplied by a number.

For the last operation listed above, we determine the multiplier by dividing the coefficient of the term we wish to eliminate by the negative of the coefficient on the main diagonal in the same column. Adding that multiple of the pivot row then cancels the term, producing the desired zero in the resulting row.

If the algorithm produces an upper triangular matrix, we can solve for x using back-substitution. This procedure of back-substitution simply solves for the entries of x from the bottom row of the matrix to the top. If the algorithm does not produce an upper triangular matrix (because somewhere along the way we are unable to form a ratio, since we have a zero on the diagonal and all zeros below it in that column), then we say the matrix is singular. This means that there is no unique point where the lines all intersect.
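
Here is a compact JavaScript sketch of the whole procedure. It is not the exact code from my script, and it adds row interchanges (partial pivoting) to pick a nonzero pivot and improve stability.

```javascript
// A compact sketch of Gaussian elimination with partial pivoting and back-substitution.
// A is the coefficient matrix (array of rows) and b the right hand side vector.
function gaussianElimination(A, b) {
  const n = A.length;
  const M = A.map((row, i) => [...row, b[i]]); // augmented matrix (A|b)

  for (let col = 0; col < n; col++) {
    // Interchange rows so the largest remaining entry in this column is the pivot.
    let pivot = col;
    for (let r = col + 1; r < n; r++) {
      if (Math.abs(M[r][col]) > Math.abs(M[pivot][col])) pivot = r;
    }
    if (Math.abs(M[pivot][col]) < 1e-12) throw new Error("Matrix is singular (or nearly so)");
    [M[col], M[pivot]] = [M[pivot], M[col]];

    // Add multiples of the pivot row to the rows below it to zero out this column.
    for (let r = col + 1; r < n; r++) {
      const factor = M[r][col] / M[col][col];
      for (let c = col; c <= n; c++) M[r][c] -= factor * M[col][c];
    }
  }

  // Back-substitution: solve for x from the bottom row up.
  const x = Array(n).fill(0);
  for (let r = n - 1; r >= 0; r--) {
    let sum = M[r][n];
    for (let c = r + 1; c < n; c++) sum -= M[r][c] * x[c];
    x[r] = sum / M[r][r];
  }
  return x;
}

console.log(gaussianElimination([[2, 1], [1, 3]], [5, 10])); // [1, 3]
```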

To learn more and see more examples, check out My Script on Gaussian Elimination.