Triangle Sum Puzzle

This is probably a consequence of being a mathematician, but I have always enjoyed number puzzles. I think there is a simplicity and universality in numbers that is not present in things like word puzzles, where the ability to reach a solution can be limited by the solver's vocabulary.

The fact that these are puzzles and not simply homework exercises also helps, because people often share stories of their difficulties and successes over the water cooler or at the lunch table. That many of these math puzzles can teach the same concepts as homework problems (in a more fun and inclusive way) is generally lost on solvers, whose primary interest is the puzzle in front of them, or sometimes the more general form of the puzzle.

Today’s post is about a puzzle that was originally shared with me over a lunch table by a friend who thought it was an interesting problem and asked what I thought about it. I didn’t give the puzzle much further thought (he had correctly solved the puzzle) until I saw it again in “Algorithmic Puzzles” by Anany Levitin and Maria Levitin. It was then that I thought about the more general form of the puzzle, derived a solution for the problem, and decided to code it up as a script for my site.

Here is the puzzle:

We have a set of random numbers arranged in a triangle, and the goal is to find the path of maximum sum from the root node (the top node) to the base (one of the nodes in the bottom row), subject to two rules:
(1) Exactly one number must be selected from each row.
(2) A number can only be selected from a row if (a) it is the root node, or (b) one of the two numbers directly above it has been selected.

So for the sample problem in the picture, the maximal path would go through nodes 57, 99, 34, 95, and 27.
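For those curious about the general solution, the maximum-sum path can be found with a bottom-up dynamic programming pass over the triangle. Here is a minimal JavaScript sketch (the triangle encoding and the sample numbers are my own illustration, not the script from the site):

function maxTrianglePath(triangle) {
  // triangle is an array of rows, e.g. [[3], [7, 4], [2, 4, 6], [8, 5, 9, 3]].
  // Work from the bottom row upward: best[col] holds the value of the best
  // path from the current cell down to the base.
  const best = triangle[triangle.length - 1].slice();
  for (let row = triangle.length - 2; row >= 0; row--) {
    for (let col = 0; col <= row; col++) {
      best[col] = triangle[row][col] + Math.max(best[col], best[col + 1]);
    }
  }
  return best[0]; // maximum sum from the root to the base
}

console.log(maxTrianglePath([[3], [7, 4], [2, 4, 6], [8, 5, 9, 3]])); // 23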

For more of these puzzles, check out the script I wrote here, and be sure to let me know what you think.

The Bridge Crossing Problem

Most puzzles are fun in their own right. Some puzzles are so fun that they have the added benefit that they are likely to come up in unexpected places, like maybe in a job interview. I was recently reading a paper by Günter Rote entitled “Crossing the Bridge at Night” where Rote analyzes such a puzzle. Upon finishing the paper, I decided to write a script so that users could see the general form of this puzzle.

The problem can be stated as follows: there is a set of people (let's make the set finite by saying there are exactly n people) who wish to cross a bridge at night. A few restrictions make crossing this bridge somewhat complicated.

  • Each person has a travel time across the bridge.
  • No more than two people can cross the bridge at one time.
  • If two people are on the bridge together, they must travel at the pace of the slower person.
  • There is only one flashlight and no party (of one or two people) can travel across the bridge without the flashlight.
  • The flashlight cannot be thrown across the bridge, and nobody can go to the store to purchase another flashlight.

The image above shows the optimal solution when the 4 people have travel times of 1, 2, 5, and 10. The script I have written allows users to work with different numbers of people with random travel times. Give it a try and see if you can spot the patterns in the solution.
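For readers who want to experiment beyond the script, the optimal total time satisfies a simple recurrence on the sorted travel times; this follows the kind of analysis in Rote's paper, though the code below is my own sketch. At each stage there are only two sensible ways to get the two slowest people across, and we take the cheaper one:

function bridgeTime(times) {
  const t = times.slice().sort((a, b) => a - b);
  const n = t.length;
  if (n === 1) return t[0];
  if (n === 2) return t[1];
  if (n === 3) return t[0] + t[1] + t[2];
  // (a) the fastest person escorts each of the two slowest across separately
  const a = 2 * t[0] + t[n - 2] + t[n - 1];
  // (b) the two fastest cross, the fastest returns with the flashlight,
  //     the two slowest cross, and the second fastest brings it back
  const b = t[0] + 2 * t[1] + t[n - 1];
  // either way, the two slowest are across; recurse on everyone else
  return Math.min(a, b) + bridgeTime(t.slice(0, n - 2));
}

console.log(bridgeTime([1, 2, 5, 10])); // 17, matching the solution above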

Magical Squares Game

Whether people first encounter them as children in elementary school, as adults in the workplace, or somewhere in between, magic squares have fascinated people for centuries; the Wikipedia article records discoveries of magic squares dating back to 650 B.C. in China.

Magic Squares of size n (for n >= 3) are n by n grids in which the numbers 1, 2, …, n² are arranged so that every row, column, and diagonal sums to the same number. Below are two examples: magic squares of sizes 3 and 4.

8 1 6
3 5 7
4 9 2
4 1 16 13
15 11 2 6
10 14 7 3
5 8 9 12

Notice first that the first square uses all of the numbers 1, 2, …, 9, and likewise the second square uses the numbers 1, 2, …, 16. Second, notice that each row, column, and diagonal sums to 15 in the first square and 34 in the second square.
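As an aside, the defining property is easy to check in code. Here is a small JavaScript sketch (my own illustration, separate from the puzzle script) that verifies whether a grid is a magic square:

function isMagicSquare(grid) {
  const n = grid.length;
  // the sum of the first row is the target every line must match
  const target = grid[0].reduce((a, b) => a + b, 0);
  let diag = 0, anti = 0;
  for (let i = 0; i < n; i++) {
    let rowSum = 0, colSum = 0;
    for (let j = 0; j < n; j++) {
      rowSum += grid[i][j];
      colSum += grid[j][i];
    }
    if (rowSum !== target || colSum !== target) return false;
    diag += grid[i][i];          // main diagonal
    anti += grid[i][n - 1 - i];  // anti-diagonal
  }
  return diag === target && anti === target;
}

console.log(isMagicSquare([[8, 1, 6], [3, 5, 7], [4, 9, 2]])); // true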

I have recently published a puzzle that is based on the concept of magic squares. There are some slight differences though.
– First, instead of using the numbers 1, 2, …, n², a random set of numbers is generated.
– Second, the rows and columns each have a desired sum that we would like the numbers in the row/column to sum to.

These two changes allow a puzzle to be built on the magic square concept. Users swap pairs of numbers until the numbers in each row and column sum to the number in their goal cell, which is located in the last column or row of the grid. Above is an image of a solved puzzle.

Users can track their progress via the numbers in the next-to-last column and row, which show the current sum of the numbers in the respective row or column.

To swap two numbers, first click on a cell containing one of them; the cell should turn red. Then click on the cell containing the other number, and the two numbers should swap. If you click on the same cell twice, no swap takes place (the cell just turns red and then blank again).

Take a moment to check out the puzzles and let me know what you think.

QR Decomposition

Suppose we have a problem that can be modeled by the system of equations Ax = b, with a matrix A and a vector b. We have already shown how to use Gaussian elimination to solve such systems, but I would like to introduce you to another method: QR decomposition. In addition to solving general systems of equations, this method is used in a number of other algorithms, including linear regression (to solve the least squares problem) and algorithms for finding the eigenvalues of a matrix.

Generally, when we see a system of equations, the standard way to solve it is Gaussian elimination or LU decomposition. However, for some matrices, notably ill-conditioned ones, these methods can lead to large rounding errors. In such cases we need a more stable algorithm, which is when the QR decomposition method becomes important.

Any matrix A of dimensions m by n with m >= n can be represented as the product of an m by m orthogonal matrix Q (i.e. QᵀQ = I) and an m by n matrix of the form [R 0]ᵀ, where R is upper triangular. One way to perform this decomposition is the Gram–Schmidt process: we first treat the columns of the matrix A as vectors, and then, for each column vector a_k, we calculate the vectors u_k and e_k given by

u_k = a_k − Σ_{j=1}^{k−1} proj_{e_j} a_k,  where proj_{e_j} a_k = ⟨e_j, a_k⟩ e_j,

and

e_k = u_k / ‖u_k‖.

Then Q = [e_1, …, e_n] and, for n = 3,

R =
⟨e_1, a_1⟩  ⟨e_1, a_2⟩  ⟨e_1, a_3⟩
0           ⟨e_2, a_2⟩  ⟨e_2, a_3⟩
0           0           ⟨e_3, a_3⟩
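To make the process concrete, here is a small JavaScript sketch of classical Gram–Schmidt (my own illustration; A is passed as an array of column vectors, and the reduced factorization is returned). Note that classical Gram–Schmidt is not the most numerically stable way to compute a QR decomposition; modified Gram–Schmidt or Householder reflections are preferred in practice.

function dot(u, v) { return u.reduce((s, ui, i) => s + ui * v[i], 0); }
function scale(u, c) { return u.map(ui => ui * c); }
function subtract(u, v) { return u.map((ui, i) => ui - v[i]); }

function qrDecompose(cols) {
  const n = cols.length;
  const Q = [];                       // orthonormal columns e_1, ..., e_n
  const R = Array.from({ length: n }, () => new Array(n).fill(0));
  for (let k = 0; k < n; k++) {
    let u = cols[k].slice();
    for (let j = 0; j < k; j++) {
      R[j][k] = dot(Q[j], cols[k]);             // <e_j, a_k>
      u = subtract(u, scale(Q[j], R[j][k]));    // remove the projection onto e_j
    }
    R[k][k] = Math.sqrt(dot(u, u));             // ||u_k||
    Q.push(scale(u, 1 / R[k][k]));              // e_k = u_k / ||u_k||
  }
  return { Q, R };
}

// Example: the columns of a 3 by 3 matrix
console.log(qrDecompose([[1, 1, 0], [1, 0, 1], [0, 1, 1]]));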

Here is a link to the JavaScript program I wrote to show how QR Decomposition works.

The A* Algorithm

As a child I remember traveling on road trips, sitting in the back of a car, trying my best to keep myself busy for what would occupy the next six to ten hours of my life. One of the things I grew to like were the simple maze books sold on the magazine racks at some of the gas stations where we'd stop for food. I employed two basic strategies for solving these mazes. For simpler mazes, I could generally just take a look at the overall structure, decide upon a path through the maze, and then draw the path without any mistakes. For more complex mazes, I would begin along the route that looked most promising. If that route reached a point where I could see it would be impossible to finish, I'd go back to where the decision was made, exclude the route I had just tried, and select the "most promising" remaining route. I'd continue this process until I had completed the maze, or until it became simple enough to solve using only my memory.

The A* Algorithm works in a similar manner to the second approach I just described. We begin at a starting point, and consider where to move next from that starting point. The set of possible options for this next move is determined by the neighbor function for a given cell. For each neighbor the algorithm estimates the length of the route through that cell by calculating the sum of two values, f(n) = g(n) + h(n), where

g(n) is the (known) cost to travel from the starting point to the cell n.
h(n) is the (approximate) cost to travel from the cell n to the final cell.

The sum g(n) + h(n) allows us to approximate the total cost of a route through the cell n.

The A* algorithm begins at the starting position of the maze. There are two sets we will be considering throughout the process of determining the optimal route, called the closedSet and the openSet. The elements of closedSet are the nodes whose total distance from the starting position has been calculated, whereas the elements of openSet represent nodes whose total distance is still under consideration. There is also a map called prev which is used to reconstruct the path. Below is how the algorithm operates:

A* Algorithm Pseudocode
closedSet is the empty set
openSet = {start}
prev is the empty map

While there are still elements in openSet,
     Find the element c* in openSet with the lowest value f(c).
     If c* is the target position
          Reconstruct the path and stop.
     Else If c* is not the target position,
          Remove c* from openSet
          Add c* to closedSet
          For each neighbor n of c* that is not in closedSet,
               Calculate a temporary g value, temp_g(n) = g(c*) + dist(c*, n).
               If n is not in openSet, or if n is in openSet and temp_g(n) < g(n),
                    Set prev(n) = c*
                    Set g(n) = temp_g(n)
                    Set f(n) = g(n) + h(n).
                    if n is not in openSet
                         add n to openSet.
     End If
End While
End Algorithm
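For readers who prefer running code, below is a compact JavaScript sketch of the pseudocode above, applied to a grid maze. The grid format (0 = open, 1 = wall), the unit step costs, and the taxicab heuristic are my own choices for illustration; this is not the exact code behind the script.

function aStar(grid, start, goal) {
  const key = (p) => p[0] + "," + p[1];
  const h = (p) => Math.abs(p[0] - goal[0]) + Math.abs(p[1] - goal[1]); // taxicab
  const open = new Map([[key(start), start]]);
  const closed = new Set();
  const g = new Map([[key(start), 0]]);
  const f = new Map([[key(start), h(start)]]);
  const prev = new Map();
  while (open.size > 0) {
    // find the open cell with the lowest f value
    let cur = null;
    for (const p of open.values()) {
      if (cur === null || f.get(key(p)) < f.get(key(cur))) cur = p;
    }
    if (cur[0] === goal[0] && cur[1] === goal[1]) {
      const path = [cur]; // reconstruct the path by walking prev backwards
      while (prev.has(key(path[0]))) path.unshift(prev.get(key(path[0])));
      return path;
    }
    open.delete(key(cur));
    closed.add(key(cur));
    for (const [dr, dc] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
      const n = [cur[0] + dr, cur[1] + dc];
      if (n[0] < 0 || n[1] < 0 || n[0] >= grid.length || n[1] >= grid[0].length) continue;
      if (grid[n[0]][n[1]] === 1 || closed.has(key(n))) continue;
      const tempG = g.get(key(cur)) + 1; // dist(c*, n) = 1 on a grid
      if (!open.has(key(n)) || tempG < g.get(key(n))) {
        prev.set(key(n), cur);
        g.set(key(n), tempG);
        f.set(key(n), tempG + h(n));
        open.set(key(n), n);
      }
    }
  }
  return null; // no path exists
}

console.log(aStar([[0, 0, 0], [1, 1, 0], [0, 0, 0]], [0, 0], [2, 0]));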

An important question becomes what makes a good heuristic function for approximating the distance to the goal. This can lead to an in-depth discussion based on the word "good", but the necessary condition for any heuristic is that it NEVER over-estimates the cost of the path from the cell to the goal. Some examples of possible heuristics for mazes are the Euclidean distance (the square root of the sum of the squares of the coordinate differences, i.e. d_E(x, y) = √(Σ_i (x_i − y_i)²)) and the taxicab distance (the sum of the absolute differences in the horizontal and vertical dimensions, i.e. d_T(x, y) = Σ_i |x_i − y_i|). Both of these are feasible heuristics on a maze. Other heuristics, like h(n) = 0 for all n, are possible, but h(n) = 0 would make the algorithm treat all cells equally, ignoring the heuristic part of the A* algorithm and turning it into Dijkstra's algorithm.

I’ve written a script that generates random mazes and uses the A* algorithm to find the optimal path through this maze. For this script, I used the taxicab distance heuristic.

Check it out and let me know what you think.

Naive Bayesian Classification

Suppose a sequence of events e_1, …, e_{n_e} has taken place, with each event belonging to one of the classification groups g_1, …, g_{n_g}. The problem of determining to which of these groups a new event belongs is the classification problem.

A naive Bayes classifier will determine to which of the possible classification groups a new observation belongs with the (naive) assumption that every feature of this new observation has no relationship to any other feature of this observation.

This assumption of independence of the columns of the feature vector allows us to use Bayes' Theorem to determine the probability that the new observation belongs to each of the classification groups. Bayes' Theorem says that the probability the new observation belongs to a classification group, given the features, is equal to the probability of the occurrence of that classification group in the observed data (i.e. P(C)) multiplied by the conditional probability of the joint distribution of the features given that classification group, P(F_1, …, F_{n_f} | C), divided by the probability of the features. The naive assumption allows us to quickly calculate the joint distribution of the features given the classification group as the product of the probabilities of each individual feature given that same group.

This can be written as:

P(C | F_1, …, F_{n_f}) = P(C) · P(F_1, …, F_{n_f} | C) / P(F_1, …, F_{n_f})

                       = P(C) · P(F_1 | C) · … · P(F_{n_f} | C) / P(F_1, …, F_{n_f})

                       = P(C) · ∏_{i=1}^{n_f} P(F_i | C) / P(F_1, …, F_{n_f})

So suppose we have observations that give the following data:
P(F1 | N) = 0.286
P(F2 | N) = 0.143
P(F3 | N) = 0.429
P(F4 | N) = 0.429

P(F1 | Y) = 0.143
P(F2 | Y) = 0.571
P(F3 | Y) = 0.571
P(F4 | Y) = 0.571

P(Y) = 0.5
P(N) = 0.5

Then P(N | F_1, F_2, F_3, F_4) ∝ 0.5 × 0.286 × 0.143 × 0.429 × 0.429 ≈ 0.0038

and P(Y | F_1, F_2, F_3, F_4) ∝ 0.5 × 0.143 × 0.571 × 0.571 × 0.571 ≈ 0.0133

After we normalize the two terms, we wind up with

P(N | F_1, F_2, F_3, F_4) ≈ 0.2204

and

P(Y | F_1, F_2, F_3, F_4) ≈ 0.7796

So the naive Bayes classifier says that the likely classification group for this observation is Y.
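Here is a short JavaScript sketch that reproduces the arithmetic above. The probability tables are exactly the numbers from the example; only the structure of the code is my own:

const priors = { N: 0.5, Y: 0.5 };
const likelihoods = {
  N: [0.286, 0.143, 0.429, 0.429], // P(F_i | N)
  Y: [0.143, 0.571, 0.571, 0.571], // P(F_i | Y)
};

function classify(priors, likelihoods) {
  const scores = {};
  let total = 0;
  for (const c of Object.keys(priors)) {
    // P(C) times the product of P(F_i | C), by the naive independence assumption
    scores[c] = likelihoods[c].reduce((p, q) => p * q, priors[c]);
    total += scores[c];
  }
  for (const c of Object.keys(scores)) scores[c] /= total; // normalize
  return scores;
}

console.log(classify(priors, likelihoods)); // { N: ~0.2204, Y: ~0.7796 }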

Check out my examples page for more examples of the naive Bayes classifier.

Hierarchical Clustering

Hierarchical Clustering algorithms give a nice introduction for computer science students to unsupervised machine learning. I say this because the bottom-up approach to Hierarchical clustering (which I have implemented here) is very similar to Kruskal’s algorithm for finding the minimum spanning tree of a graph.

In Kruskal’s algorithm, we begin by creating a forest, or a set of trees where each node is its own tree. The algorithm then selects the two trees that are closest together (closest being defined as the minimum cost edge between two distinct trees) and merges those trees together. This process of merging the closest two trees is then repeated until there is only one tree remaining, which is a minimum spanning tree of the graph.

Similarly, bottom-up hierarchical clustering of a group of points begins by saying that each point is its own cluster. The clusters are then compared to one another to decide whether two clusters should be merged into one. Generally, there will be some stopping criterion ε, saying that we do not want to merge two clusters if the distance between them is greater than ε. So if the minimum distance between two clusters is less than ε, we proceed as in Kruskal's algorithm by merging these two clusters into one. We repeat this process of merging the closest two clusters until the minimum distance between two clusters is greater than or equal to ε, at which point we stop, and the result is a partition of our data set into distinct clusters.
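Here is a minimal JavaScript sketch of this bottom-up procedure (my own illustration), using single-linkage distance, where the distance between two clusters is the smallest pairwise distance between their members:

function cluster(points, eps) {
  let clusters = points.map(p => [p]); // each point starts as its own cluster
  const dist = (a, b) => Math.hypot(a[0] - b[0], a[1] - b[1]);
  const clusterDist = (A, B) =>
    Math.min(...A.flatMap(a => B.map(b => dist(a, b))));
  while (clusters.length > 1) {
    // find the closest pair of clusters
    let bi = 0, bj = 1, best = Infinity;
    for (let i = 0; i < clusters.length; i++)
      for (let j = i + 1; j < clusters.length; j++) {
        const d = clusterDist(clusters[i], clusters[j]);
        if (d < best) { best = d; bi = i; bj = j; }
      }
    if (best >= eps) break; // stopping criterion: nothing close enough to merge
    clusters[bi] = clusters[bi].concat(clusters[bj]); // merge the pair
    clusters.splice(bj, 1);
  }
  return clusters;
}

console.log(cluster([[0, 0], [0, 1], [5, 5], [5, 6], [20, 20]], 2).length); // 3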

Hierarchical clustering is comparable to K-Means Clustering. Here are some differences between the two approaches:

  1. K-Means Clustering requires an initial number of desired clusters, while Hierarchical clustering does not.
  2. A run of K-Means Clustering will always give K clusters, whereas Hierarchical Clustering can give more or fewer, depending on our tolerance ε.
  3. K-Means can undo previous mistakes (assignments of an element to the wrong cluster), while Hierarchical Clustering cannot.

So, here is a link to my page on Hierarchical Clustering. Hope you enjoy.

Introduction to JavaScript Programming


I received a lot of attention from friends interested in programming after my recent blog post entitled "Introduction to Python Programming". While many found it interesting, the fact that Python is more useful to mathematicians hindered some of my friends' desire to learn it as their first language.

In our conversations, my recommendation for a first language was JavaScript. It is a powerful language in the sense that just about anybody who is involved with the internet knows it, and it's likely to boost a person's resume. It also has many similarities to more powerful languages like C++ and Java, so while not trivial, it can be a good launch pad into more advanced languages. But my favorite reason is that, unlike many other programming languages that rely on an MS-DOS-like command-line approach for run-time interaction, JavaScript's basic interaction is with the standard internet browsers we use every day. There isn't even anything you need to download or install: just create a basic HTML file in a text editor (like Notepad, WordPad, or Notepad++). This makes it easier to show off your creations, which makes learning more fun.
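To make that concrete, here is roughly what a first HTML file can look like (a minimal sketch; the file name and contents are just an example). Save it as something like hello.html and open it in your browser:

<!DOCTYPE html>
<html>
  <head>
    <title>My First Script</title>
  </head>
  <body>
    <script>
      // Everything between the script tags is JavaScript.
      var name = "world";
      document.write("Hello, " + name + "!");
    </script>
  </body>
</html>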

The script I've finished provides examples of writing output, declaring variables, data types, conditionals, loops, and functions. Although I do not go into detail about all the events and objects on an HTML page, I do finish with three examples of more advanced JavaScript programs. Once you've selected a program, the code will be revealed in the text area. There is also a button that, when clicked, will execute that script in a new browser tab.

I hope you enjoy, and let me know if you have any suggestions or comments.

With that being said, here is a link to my sample JavaScript code.

Simple Linear Regression

I finished a script that helps explain the concepts of simple linear regression.

We live in a world that is filled with patterns, patterns all around us just waiting to be discovered. Some of these patterns are not as easily discovered as others because of outside noise.

Consider for example an experiment where a set of people were each given the task of drinking a number of beers and having their blood alcohol level taken afterwards. Some noise factors in this could include the height and weight of the individual, the types of drinks, the amount of food eaten, and the time between drinks. Even with this noise, though, we can still see a correlation between the number of drinks and their blood alcohol level. Consider the following graph showing people’s blood-alcohol level after a given number of drinks. The x-axis represents the number of drinks and the y-axis is the corresponding blood alcohol level.

Example of Linear Regression

x 5 2 9 8 3 7 3 5 3 5 4 6 5 7 1 4
y 0.10 0.03 0.19 0.12 0.04 0.095 0.070 0.060 0.020 0.050 0.070 0.10 0.085 0.090 0.010 0.050

We can definitely see a correlation, and although the data doesn't quite fit on a straight line, it leads us to further questions: can we use this data to build a model that estimates a person's blood-alcohol level, and how strong would such a model be?

One of the tools we can use to model this problem is linear regression. A linear regression takes a two-dimensional data set with the assumption that one column (generally represented by the x variable) is independent and the second column (generally represented by the y variable) depends on the first. The assumption is that the relationship between the two columns is linear and can be represented by the linear equation

y = β_0 + β_1 x + ε.

The right-hand side of the above equation has three terms. The first two (β_0 and β_1) are the parameters of the linear equation (the y-intercept and slope, respectively), while the third term, ε, represents the error: the difference between this linear equation and the y values in the data provided. We are seeking the line that minimizes the total squared error. That is, we are seeking to minimize

D = Σ_{i=1}^{n} [y_i − (β_0 + β_1 x_i)]²

There are several ways one could approach this problem, and in fact several lines one could use to build a linear model. The first line one might use is the one generated by the mean of the y values alone, called the horizontal line regression.

For the data set above, the mean of the y values can be calculated as ȳ = 0.0738, so we could build a linear model based on this mean: y = 0.0738. This horizontal line regression model predicts the same value (the mean) regardless of the x value. This lack of adjustment means it is generally a poor fit, but as we will see later, the horizontal line regression model does serve a purpose in determining how well the models we develop perform.

A second attempt at solving this problem is to generate the least-squares line. This is the line that minimizes the D value listed above. We can see that D is a multi-variable polynomial, and we can find the minimum of such a polynomial using calculus, partial derivatives, and Gaussian elimination (I omit the work here because it would distract from the main point of this post; however, Steven J. Miller has a good write-up of it).

The calculus leads us to the following equations:

S_XY = Σ_{i=1}^{n} x_i y_i − (Σ_{i=1}^{n} x_i)(Σ_{i=1}^{n} y_i) / n

S_XX = Σ_{i=1}^{n} x_i² − (Σ_{i=1}^{n} x_i)² / n

β_1 = S_XY / S_XX

β_0 = ȳ − β_1 x̄

To calculate the least-squares line for this example, we first need to calculate a few values:

Σ_{i=1}^{n} x_i y_i = 6.98
Σ_{i=1}^{n} x_i² = 443
Σ_{i=1}^{n} x_i = 77
Σ_{i=1}^{n} y_i = 1.18
S_XX = 72.44
S_XY = 1.30

This lets us evaluate

β_1 = 0.018
and
β_0 = −0.0127

So the resulting linear equation for this data is

ŷ = −0.0127 + 0.018x

Below is a graph of the two attempts at building a linear model for this data.

Example of Models of Linear Regression

In the above image, the green line represents the horizontal line regression model and the blue line represents the least-squares line. As stated above, the horizontal line regression model is a horizontal line that does not adjust as the data changes. The least-squares line adjusts both its slope and y-intercept according to the data provided, to better fit that data. The question becomes: how well does the least-squares line fit the data?

The Sum of Squares Error (SSE) sums the squared deviation of each data point from the least-squares line.

SSE = Σ_{i=1}^{n} (y_i − ŷ_i)²

A second metric that we are interested in is how well the horizontal line regression model estimates our data. This is called the Total Sum of Squares (SST).

SST = Σ_{i=1}^{n} (y_i − ȳ)²

The horizontal line regression model ignores the independent variable x from our data set, and thus any line that takes this variable into account will be an improvement on it. Because of this, the SST sum is a worst-case scenario of how poorly our model can perform.

Since SST is always at least as large as SSE, we define the regression sum of squares (SSR) as the difference between the total sum of squares and the sum of squares error.

SSR = SST – SSE

This tells us how much of the total sum of squares is accounted for by the model.

Finally, the coefficient of determination (r²) is defined by

r² = SSR / SST

This expresses SSR as a fraction of SST: the proportion of the variation in the data that is explained by the model.
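To tie the example together, here is a JavaScript sketch (my own illustration) that recomputes the slope, intercept, and coefficient of determination from the data table above:

const x = [5, 2, 9, 8, 3, 7, 3, 5, 3, 5, 4, 6, 5, 7, 1, 4];
const y = [0.10, 0.03, 0.19, 0.12, 0.04, 0.095, 0.070, 0.060,
           0.020, 0.050, 0.070, 0.10, 0.085, 0.090, 0.010, 0.050];

function leastSquares(x, y) {
  const n = x.length;
  const sum = (arr) => arr.reduce((a, b) => a + b, 0);
  const sx = sum(x), sy = sum(y);
  const sxy = sum(x.map((xi, i) => xi * y[i]));
  const sxx = sum(x.map(xi => xi * xi));
  const Sxy = sxy - (sx * sy) / n;   // S_XY
  const Sxx = sxx - (sx * sx) / n;   // S_XX
  const b1 = Sxy / Sxx;              // slope, beta_1
  const b0 = sy / n - b1 * (sx / n); // intercept, beta_0 = ybar - beta_1 * xbar
  const yhat = x.map(xi => b0 + b1 * xi);
  const sse = sum(y.map((yi, i) => (yi - yhat[i]) ** 2));
  const ybar = sy / n;
  const sst = sum(y.map(yi => (yi - ybar) ** 2));
  return { b0, b1, r2: (sst - sse) / sst }; // r2 = SSR / SST
}

console.log(leastSquares(x, y)); // b0 ≈ -0.0127, b1 ≈ 0.018, r2 ≈ 0.80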

So, check out my script on simple linear regression and let me know what you think.

Learning About Truth Tables

Here is a link to my sample Truth Table Generator.


An important part of mathematics and computer programming is understanding conditional expressions. These are statements that generally read like "if [condition is true], then [execute a sequence of statements]". A simple example of this is that if we wanted to print out every even number x, our conditional would be "if 2 divides into x with remainder 0, then print x", or in JavaScript

if (x % 2 == 0)
{
  document.write(x);
}

Conditional expressions belong to the world of Boolean logic. These are expressions that evaluate to true or false, depending on the values of the variables involved in this expression. When we are dealing with real world examples, this is generally a statement like “X is an even number” (for some number X) or “The element x is in the set Y” (for some element x and some set Y). Notice that both the statements can be evaluated as true or false statements. We are interested in understanding the Boolean logic behind combining a number of these expressions, and understanding how the evaluation of the simpler expressions help determine the values of the more complex formulas.

One way of doing this in mathematics is by constructing a truth table. A truth table is a table that shows how a Boolean expression’s value can be computed. The procedure in constructing a truth table is to first add a column to the table for each variable involved in the expression. Then we compute the value of each sub-expression of the expression in its own column until we have computed the entire expression in the final column.

There are four logical operators that we will be working with:

– The negation operator (¬P), which returns true if the variable P is false, and returns false otherwise.
– The or operator (P ∨ Q), which returns true if P is true or Q is true, or if both are true, and returns false otherwise.
– The and operator (P ∧ Q), which returns true if both P and Q are true, and returns false otherwise.
– The implies operator (P → Q), which returns false if P is true and Q is false, and returns true otherwise.

An example of this is below:

Suppose we have the following formula:
((Q ∧ P) ∨ (¬(Q ∧ R)))

The truth table would then be:

P Q R (Q ∧ R) (¬(Q ∧ R)) (Q ∧ P) ((Q ∧ P) ∨ (¬(Q ∧ R)))
F F F F T F T
F F T F T F T
F T F F T F T
F T T T F F F
T F F F T F T
T F T F T F T
T T F F T T T
T T T T F T T

First, notice that the truth table has 8 rows. This corresponds to the 8 distinct possible combinations of values for the three variables. The number 8 is also 2³, which is not a coincidence: in general, if an expression has n variables, its corresponding truth table will contain 2ⁿ rows. In this truth table, column 4 represents the sub-expression (Q ∧ R). Notice that the only rows where this expression evaluates to true are those where both Q and R are true. The next column is (¬(Q ∧ R)), the negation of (Q ∧ R), so the values in this column are the opposite of those in the previous column. We follow that up with the column for the sub-expression (Q ∧ P), which is true only when the variables Q and P are both true. The final column is the expression we started with, ((Q ∧ P) ∨ (¬(Q ∧ R))), which we can now evaluate from the two previous columns: it is true exactly in the rows where at least one of those two columns is true.
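Generating a table like this programmatically is a nice exercise. Here is a small JavaScript sketch (my own illustration; each expression is passed in as a function of the variable assignment):

function truthTable(vars, exprs) {
  // exprs is an array of { label, fn } pairs, where fn maps an
  // assignment of the variables to true or false
  console.log([...vars, ...exprs.map(e => e.label)].join(" | "));
  for (let mask = 0; mask < (1 << vars.length); mask++) {
    const env = {};
    // read each variable's truth value from the bits of mask, so the
    // rows come out in the order F...F through T...T
    vars.forEach((v, i) => {
      env[v] = Boolean((mask >> (vars.length - 1 - i)) & 1);
    });
    const cells = vars.map(v => (env[v] ? "T" : "F"));
    for (const e of exprs) cells.push(e.fn(env) ? "T" : "F");
    console.log(cells.join(" | "));
  }
}

truthTable(["P", "Q", "R"], [
  { label: "(Q ∧ R)", fn: e => e.Q && e.R },
  { label: "(¬(Q ∧ R))", fn: e => !(e.Q && e.R) },
  { label: "(Q ∧ P)", fn: e => e.Q && e.P },
  { label: "((Q ∧ P) ∨ (¬(Q ∧ R)))", fn: e => (e.Q && e.P) || !(e.Q && e.R) },
]);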

Check out my script on truth tables to see more examples and learn more about truth tables.