Category Archives: Examples

These are updates to my examples database.

Hierarchical Clustering

Hierarchical clustering algorithms give a nice introduction to unsupervised machine learning for computer science students. I say this because the bottom-up approach to hierarchical clustering (which I have implemented here) is very similar to Kruskal's algorithm for finding the minimum spanning tree of a graph.

In Kruskal’s algorithm, we begin by creating a forest, or a set of trees where each node is its own tree. The algorithm then selects the two trees that are closest together (closest being defined as the minimum cost edge between two distinct trees) and merges those trees together. This process of merging the closest two trees is then repeated until there is only one tree remaining, which is a minimum spanning tree of the graph.

Similarly, bottom-up hierarchical clustering of a group of points begins by saying that each point is its own cluster. Then the clusters are compared to one another to check whether two clusters should be merged into one. Generally, there will be some stopping criterion, a threshold δ, saying that we do not want to merge two clusters together if their distance is greater than δ. So if the minimum distance between two clusters is less than δ, we will proceed as in Kruskal's algorithm by merging these two clusters together into one cluster. We repeat this process of merging the closest two clusters together until we find that the minimum distance between two clusters is greater than or equal to δ, in which case we can stop and the result is a partition of our data set into distinct clusters.
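
To make the connection with Kruskal's algorithm concrete, here is a minimal JavaScript sketch of the bottom-up procedure described above (an illustration added for this post, not the code on my linked page). It assumes two-dimensional points, measures the distance between clusters as the smallest point-to-point distance between them (single linkage), and stops once the closest pair of clusters is at least δ apart.

// Bottom-up (agglomerative) clustering of 2D points with threshold delta.
// Single linkage: the distance between two clusters is the smallest
// distance between a point of one and a point of the other.
function hierarchicalClustering(points, delta) {
  // Start with each point in its own cluster.
  var clusters = points.map(function (p) { return [p]; });

  function dist(a, b) {
    return Math.sqrt(Math.pow(a[0] - b[0], 2) + Math.pow(a[1] - b[1], 2));
  }

  function clusterDist(c1, c2) {
    var best = Infinity;
    for (var i = 0; i < c1.length; i++) {
      for (var j = 0; j < c2.length; j++) {
        best = Math.min(best, dist(c1[i], c2[j]));
      }
    }
    return best;
  }

  while (clusters.length > 1) {
    // Find the two closest clusters, just as Kruskal's algorithm finds
    // the cheapest edge between two distinct trees.
    var bestI = -1, bestJ = -1, bestD = Infinity;
    for (var i = 0; i < clusters.length; i++) {
      for (var j = i + 1; j < clusters.length; j++) {
        var d = clusterDist(clusters[i], clusters[j]);
        if (d < bestD) { bestD = d; bestI = i; bestJ = j; }
      }
    }
    // Stop once the closest pair of clusters is at least delta apart.
    if (bestD >= delta) { break; }
    // Otherwise merge the two clusters into one.
    clusters[bestI] = clusters[bestI].concat(clusters[bestJ]);
    clusters.splice(bestJ, 1);
  }
  return clusters;
}

// Two well-separated groups with delta = 3 should produce 2 clusters.
var grouped = hierarchicalClustering([[0, 0], [1, 0], [0, 1], [10, 10], [10, 11]], 3);
console.log(grouped.length);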

Hierarchical clustering is comparable to K-Means Clustering. Here are some differences between the two approaches:

  1. K-Means Clustering requires an initial number of desired clusters, while Hierarchical clustering does not.
  2. A run of K-Means Clustering will always give K clusters, whereas Hierarchical Clustering can give more or fewer, depending on our tolerance δ.
  3. K-Means can undo previous mistakes (assignments of an element to the wrong cluster), while Hierarchical Clustering cannot.

So, here is a link to my page on Hierarchical Clustering. Hope you enjoy.

Introduction to JavaScript Programming

Here is a link to my sample JavaScript code.

I received a lot of attention from friends interested in programming after my recent blog post entitled "Introduction to Python Programming". While many found it interesting, the fact that Python is more useful to mathematicians hindered some of my friends' desire to learn it as their first language.

In our conversations, my recommendation for a first language was JavaScript. This is a powerful language in the sense that just about anybody who is involved with the internet knows of it, and it's likely to boost a person's resume. It also has many similarities to more powerful languages like C++ and Java, so while not trivial, it could be a good launch pad into more advanced languages. But my favorite reason is that unlike many other programming languages that rely on an MS-DOS-like command-line approach for run-time interaction, JavaScript's basic interaction is with the standard internet browsers we use every day. There isn't even anything you need to download or install. Just create a basic HTML file in a text editor (like Notepad, WordPad, or Notepad++) and open it in a browser. This makes it easier to show off your creations, which makes learning more fun.
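
For example, a file as small as the following (a throwaway illustration added here, not part of my linked script) can be typed into any of those editors, saved as something like hello.html, and opened directly in a browser:

<!DOCTYPE html>
<html>
  <head>
    <title>My First JavaScript Page</title>
  </head>
  <body>
    <script>
      // Write a greeting directly into the page.
      document.write("Hello from JavaScript!");
      // Pop up a dialog box as a second form of output.
      alert("The page has finished loading.");
    </script>
  </body>
</html>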

The script I've finished provides examples on writing output, declaring variables, data types, conditionals, loops, and functions. Although I do not go into detail about all the events and objects on an HTML page, I do finish with three examples of more advanced JavaScript programs. Once you've selected a program, the code will be revealed in the text area. There is also a button that, when clicked, will execute that script in a new browser tab.

I hope you enjoy, and let me know if you have any suggestions or comments.

With that being said, here is a link to my sample JavaScript code.

Simple Linear Regression

I finished a script that helps explain the concepts of simple linear regression.

We live in a world that is filled with patterns – patterns all around us just waiting to be discovered. Some of these patterns are not as easily discovered because of the existence of outside noise.

Consider for example an experiment where a set of people were each given the task of drinking a number of beers and having their blood alcohol level taken afterwards. Some noise factors in this could include the height and weight of the individual, the types of drinks, the amount of food eaten, and the time between drinks. Even with this noise, though, we can still see a correlation between the number of drinks and their blood alcohol level. Consider the following graph showing people’s blood-alcohol level after a given number of drinks. The x-axis represents the number of drinks and the y-axis is the corresponding blood alcohol level.

Example of Linear Regression

x 5 2 9 8 3 7 3 5 3 5 4 6 5 7 1 4
y 0.10 0.03 0.19 0.12 0.04 0.095 0.070 0.060 0.020 0.050 0.070 0.10 0.085 0.090 0.010 0.050

We can definitely see a correlation, even though the data doesn't quite fit on a straight line. This leads us to ask further questions, like: can we use this data to build a model that estimates a person's blood-alcohol level, and how strong is such a model?

One of the tools we can use to model this problem is linear regression. A linear regression takes a two-dimensional data set, with the assumption that one column (generally represented by the x variable) is independent and the second column (generally represented by the y variable) is dependent on the first. The assumption is that the relationship between the two columns is linear and can be represented by the linear equation

y = β0 + β1x + ε.

The right hand side of the above equation has three terms. The first two (β0 and β1) are the parameters of the linear equation (the y-intercept and slope respectively), while the third term, ε, represents the error term. The error term represents the difference between this linear equation and the y values in the data provided. We are seeking the line that makes these errors as small as possible overall. That is, we are seeking to minimize

D = Σi=1 to n [yi − (β0 + β1xi)]²

There are several ways one could approach this problem. In fact, there are several lines that one could use to build a linear model. The first line that one may use to model these points is the one generated by the mean of the y values alone, called the horizontal line regression.

For the data set above, the mean of the y values can be calculated as ȳ = 0.0738, so we could build a linear model based on this mean that would be y = 0.0738. This horizontal line regression model is a horizontal line that predicts the same value (the mean), regardless of the x value. This lack of adjustment means it is generally a poor fit for most data sets. But as we will see later, this horizontal line regression model does serve a purpose in determining how well the model we develop performs.

A second attempt at solving this problem would be to generate the least squares line. This is the line that minimizes the D value listed above. We can see that D is a multi-variable polynomial, and we can find the minimum of such a polynomial using calculus, partial derivatives and Gaussian elimination (I will omit the work here because it distracts from the main point of this blog post; however, Steven J. Miller has a good write-up of this).

The calculus leads us to the following equations:

SXY = Σi=1 to n (xiyi) − [(Σi=1 to n xi)(Σi=1 to n yi)] / n

SXX = Σi=1 to n (xi²) − [(Σi=1 to n xi)²] / n

β1 = SXY / SXX

β0 = ȳ − β1x̄

To calculate the least squares line for this example, we first need to calculate a few values:
Σi=1 to n (xiyi) = 6.98
Σi=1 to n (xi²) = 443
Σi=1 to n xi = 77
Σi=1 to n yi = 1.18
SXX = 72.44
SXY = 1.30

This lets us evaluate that
β1 = SXY / SXX ≈ 0.018
and
β0 = ȳ − β1x̄ ≈ -0.0127 (using x̄ = 77/16 = 4.8125 and ȳ = 0.0738)

So the resulting linear equation for this data is

ŷ = -0.0127 + 0.018x
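
As a sanity check on the arithmetic above, here is a short JavaScript sketch (an illustration added here, not the code behind my linked script) that computes SXX, SXY and the two coefficients directly from the data set:

// The blood-alcohol data from the table above.
var xData = [5, 2, 9, 8, 3, 7, 3, 5, 3, 5, 4, 6, 5, 7, 1, 4];
var yData = [0.10, 0.03, 0.19, 0.12, 0.04, 0.095, 0.070, 0.060,
             0.020, 0.050, 0.070, 0.10, 0.085, 0.090, 0.010, 0.050];

function leastSquares(x, y) {
  var n = x.length;
  var sumX = 0, sumY = 0, sumXY = 0, sumXX = 0;
  for (var i = 0; i < n; i++) {
    sumX += x[i];
    sumY += y[i];
    sumXY += x[i] * y[i];
    sumXX += x[i] * x[i];
  }
  var sxy = sumXY - (sumX * sumY) / n;       // SXY
  var sxx = sumXX - (sumX * sumX) / n;       // SXX
  var beta1 = sxy / sxx;                     // slope
  var beta0 = sumY / n - beta1 * (sumX / n); // intercept = ybar - beta1 * xbar
  return { beta0: beta0, beta1: beta1 };
}

var fit = leastSquares(xData, yData);
console.log(fit.beta1.toFixed(3)); // roughly 0.018
console.log(fit.beta0.toFixed(4)); // roughly -0.0127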

Below is a graph of the two attempts at building a linear model for this data.

Example of Models of Linear Regression

In the above image, the green line represents the horizontal line regression model and the blue line represents the least-squares line. As stated above, the horizontal line regression model is a horizontal line that does not adjust as the data changes. The least-squares line adjusts both its slope and its y-intercept according to the data provided, in order to better fit that data. The question becomes: how well does the least-squares line fit the data?

The Sum of Squares Error (SSE) sums the squared deviation of each data point from the least-squares line, where ŷi = β0 + β1xi denotes the value predicted by the line at xi.

SSE = Σi=1 to n (yi − ŷi)²

A second metric that we are interested in is how far the data deviates from the horizontal line regression model, which predicts ȳ for every x. This is called the Total Sum of Squares (SST).

SST = Σi=1 to n (yi − ȳ)²

The horizontal line regression model ignores the independent variable x in our data set, so any line that takes this independent variable into account should do at least as well. Because of this, the SST sum is a worst-case scenario of how poorly our model can perform.

Knowing now that SST is always at least as large as SSE, the regression sum of squares (SSR) is defined as the difference between the total sum of squares and the sum of squares error.

SSR = SST – SSE

This tells us how much of the total sum of squares is accounted for by the model.

Finally, the coefficient of determination (r²) is defined by

r² = SSR / SST

This expresses SSR as a percentage of SST, or equivalently the proportion of the variation in the data that is explained by the model.
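
Continuing the sketch from the least-squares section (again, an added illustration rather than my actual script), these quantities can be computed directly from the fitted coefficients:

// Assumes xData, yData and fit from the earlier sketch are still in scope.
function goodnessOfFit(x, y, beta0, beta1) {
  var n = y.length;
  var ybar = 0;
  for (var i = 0; i < n; i++) { ybar += y[i]; }
  ybar = ybar / n;

  var sse = 0, sst = 0;
  for (var j = 0; j < n; j++) {
    var predicted = beta0 + beta1 * x[j];
    sse += Math.pow(y[j] - predicted, 2); // deviation from the least-squares line
    sst += Math.pow(y[j] - ybar, 2);      // deviation from the horizontal line ybar
  }
  var ssr = sst - sse;
  return { SSE: sse, SST: sst, SSR: ssr, r2: ssr / sst };
}

console.log(goodnessOfFit(xData, yData, fit.beta0, fit.beta1).r2); // closer to 1 means a better fit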

So, check out my script on simple linear regression and let me know what you think.

Learning About Truth Tables

Here is a link to my sample Truth Table Generator.

Truth Tables Generator Image

An important part of mathematics and computer programming is understanding conditional expressions. These are statements that generally read like "if [condition is true], then [execute a sequence of statements]". A simple example: if we wanted to print out every even number x, our conditional would be "if (2 divides into x with remainder 0), then print x", or in JavaScript

if (x % 2 == 0)
{
  document.write(x); // print the value of x itself, not the literal string "x"
}

Conditional expressions belong to the world of Boolean logic. These are expressions that evaluate to true or false, depending on the values of the variables involved in this expression. When we are dealing with real world examples, this is generally a statement like “X is an even number” (for some number X) or “The element x is in the set Y” (for some element x and some set Y). Notice that both the statements can be evaluated as true or false statements. We are interested in understanding the Boolean logic behind combining a number of these expressions, and understanding how the evaluation of the simpler expressions help determine the values of the more complex formulas.

One way of doing this in mathematics is by constructing a truth table. A truth table is a table that shows how a Boolean expression’s value can be computed. The procedure in constructing a truth table is to first add a column to the table for each variable involved in the expression. Then we compute the value of each sub-expression of the expression in its own column until we have computed the entire expression in the final column.

There are four logical operators that we will be working with:

– The negation operator (¬P), which returns true if the variable P is false, and returns false otherwise.
– The or operator (P ∨ Q), which returns true if P is true or Q is true, or if both are true, and returns false otherwise.
– The and operator (P ∧ Q), which returns true if both P and Q are true, and returns false otherwise.
– The implies operator (P → Q), which returns false if P is true and Q is false, and returns true otherwise.

An example of this is below:

Suppose we have the following formula:
((Q ∧ P) ∨ (¬(Q ∧ R)))

The truth table would then be:

P Q R (Q ∧ R) (¬(Q ∧ R)) (Q ∧ P) ((Q ∧ P) ∨ (¬(Q ∧ R)))
F F F F T F T
F F T F T F T
F T F F T F T
F T T T F F F
T F F F T F T
T F T F T F T
T T F F T T T
T T T T F T T

First, notice that the truth table has 8 rows. This corresponds to the 8 distinct possible combinations of values for the three variables. The number 8 is also 2³, which is not a coincidence. In general, if an expression has n variables, its corresponding truth table will contain 2ⁿ rows. In this truth table, column 4 represents the sub-expression (Q ∧ R). Notice that the only times this expression evaluates to true is when both Q and R are true. The next column is (¬(Q ∧ R)), the negation of (Q ∧ R), so the values in this column are the opposite of those in the previous column. We follow that up with the column for the sub-expression (Q ∧ P), which has a value of true only when the variables Q and P both have values of true. And the final column is the expression we started with, ((Q ∧ P) ∨ (¬(Q ∧ R))), but we can now evaluate the expression based on the two previous columns. Notice that the values of true in this final column correspond exactly to the rows where at least one of the previous two columns evaluated to true.
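
Here is a small JavaScript sketch (separate from my linked script) that builds the rows of this particular truth table by looping over all 2ⁿ = 8 assignments:

// Enumerate every true/false assignment to P, Q, R and evaluate each
// sub-expression of ((Q ∧ P) ∨ (¬(Q ∧ R))).
var truthValues = [false, true];
for (var pi = 0; pi < 2; pi++) {
  for (var qi = 0; qi < 2; qi++) {
    for (var ri = 0; ri < 2; ri++) {
      var P = truthValues[pi], Q = truthValues[qi], R = truthValues[ri];
      var qAndR = Q && R;           // (Q ∧ R)
      var notQAndR = !qAndR;        // (¬(Q ∧ R))
      var qAndP = Q && P;           // (Q ∧ P)
      var full = qAndP || notQAndR; // the full expression
      var row = [P, Q, R, qAndR, notQAndR, qAndP, full];
      // Print the row as a line of T's and F's.
      document.write(row.map(function (v) { return v ? "T" : "F"; }).join(" ") + "<br>");
    }
  }
}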

Check out my script on truth tables to see more examples and learn more about truth tables.

Introduction to Python Programming

Here is a link to my sample Python code.

One of the somewhat unforeseen consequences of pursuing a career in applied mathematics, particularly in this day and age, is that you will eventually need to write computer programs that implement mathematical algorithms. There are several languages in which one can do this, each with its own positives and negatives, and you will find that things that are simple in one language are difficult in another. After speaking with a number of people, both students and professionals who work with mathematics on a regular basis, I reasoned that it may be helpful to provide some source code examples to help mathematicians get started with programming in some of these languages. I decided to start with Python because it's a powerful language, it's available for free, and its learning curve isn't too steep.

I will be working with Python 2.7, which can be downloaded from https://www.python.org/download/releases/2.7/. I understand, however, that a limitation to coding is the required setup often necessary before one can even write a first line of code. So while I do encourage you to download Python, I will also provide a link to the online Python compiler at Compile Online, which should allow users to simply copy and paste the code into a new tab in their browser and, by clicking the "Execute Script" command in the upper left corner, see the output of the code.

With that being said, here is a link to my sample Python code.

Probability: Sample Spaces

I've been doing a few games lately (they can be seen here, here and here) and, while I think those are very good ways to become interested in some of the avenues of math research, I have also had a few people come to me with questions regarding help with their classes. So I decided to write a script to try to help explain some elementary probability theory, focusing on discrete sample spaces.

Probability Image

In statistics, any process of observation is referred to as an experiment.
The set of all possible outcomes of an experiment is called the sample space, and it is usually denoted by S. Each outcome in a sample space is called an element of the sample space. An event is a subset of the sample space, and we say the event occurs when the outcome of the experiment is an element of that subset. Two events are said to be mutually exclusive if they have no elements in common.

Similar to set theory, we can form new events by performing operations like unions, intersections and complements on other events. If A and B are any two subsets of a sample space S, then their union A ∪ B is the subset of S that contains all the elements that are in A, in B, or in both; their intersection A ∩ B is the subset of S that contains all the elements that are in both A and B; and the complement A' of A is the subset of S that contains all the elements of S that are not in A.

A probability is a function that assigns real numbers to events of a sample space. The following are the axioms of probability that apply when the sample space is discrete (finite or countable).

Axiom 1: The probability of an event is a non-negative real number; that is P(A) ≥ 0 for any subset A of S.
Axiom 2: The probability of the entire sample space is 1; that is P(S) = 1.
Axiom 3: If A1, A2, A3, … , is a finite or infinite sequence of mutually exclusive events of S, then
P(A1 ∪ A2 ∪ A3 ∪ …) = P(A1) + P(A2) + P(A3) + …
If A and B are any two events in a sample space S and P(A) ≠ 0, the conditional probability of B given A is

P(B | A) = P(A ∩ B) / P(A)

Two events A and B are independent if and only if P(A ∩ B) = P(A) ∙ P(B).
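
As a small worked illustration of these definitions (an example added here, not taken from my script), consider one roll of a fair six-sided die, where each of the six outcomes is equally likely:

// Sample space for one roll of a fair die.
var S = [1, 2, 3, 4, 5, 6];
var A = [2, 4, 6]; // event: the roll is even
var B = [4, 5, 6]; // event: the roll is greater than 3

// With equally likely outcomes, P(E) is the size of E divided by the size of S.
function probability(event) {
  return event.length / S.length;
}

function intersection(e1, e2) {
  return e1.filter(function (outcome) { return e2.indexOf(outcome) !== -1; });
}

var pA = probability(A);                      // 3/6
var pAandB = probability(intersection(A, B)); // {4, 6}, so 2/6
var pBgivenA = pAandB / pA;                   // conditional probability P(B | A) = 2/3
console.log(pBgivenA);
// Independence check: P(A ∩ B) = 1/3 while P(A) ∙ P(B) = 1/4,
// so these two events are not independent.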

K Means Clustering for 2013 NFL Stats

I just did some data mining on the stats from the 2013 NFL season. Nothing too exotic, just a bit of k-means clustering based on the 2013 defensive stats. http://www.pro-football-reference.com allows us to copy and paste the entire set of defensive stats at once for a whole year. So I ran those numbers through my k-means clustering program. My assumption is that the more clusters we ask for, the more the different defensive positions and skill levels will distinguish themselves.

I realize, though, that simple clustering won't necessarily know a LB from a DE or a S, but good stats are good stats, so I just wanted to see who the program paired together. I'll see if I can find a better set of stats to play with later.
Check it out

Nim Games

I enjoy going to schools to give talks. Generally, I try to focus these talks on mathematics that isn't generally taught in classrooms, to try to connect with the inquisitive nature of the students. One of my favorite ways of doing this is through combinatorial games. These are generally two-player sequential games (i.e. players alternate taking moves) in which both players know all the information about the game before any moves are made. This is called a game of complete information. In addition, these games are deterministic, in that unlike a game of poker or dice there is no random element introduced into the game.

One of the most common ways of introducing students to combinatorial games is through the game of Nim (which is also called the Subtraction game). I've written a script here to help introduce this game. In the game of Nim, there is initially a number (p) of stones in a pile. There is also an array of possible legal moves that each player can choose from on each turn. Players alternate removing a legal number of stones from the pile until some player is unable to make a move, at which point the opposing player (the player who made the last move) is declared the winner.

So, for example, a game of (1-2-3)-Nim could go as follows. Suppose initially there are 23 stones.

Stones in Pile   Player   Stones Removed
23   1   3
20   2   1
19   1   2
17   2   2
15   1   1
14   2   3
11   1   2
9   2   1
8   1   3
5   2   2
3   1   1
2   2   2

In the above example, since player 2 removes the last stone, player 1 is unable to move so player 2 is declared the winner. Each move that a player makes is either removing 1, 2 or 3 stones as we initially stated in the rules of the game.

Because Nim is a game of perfect information, we know a lot about the game before any moves are made. In fact, we can determine who should win the game, if it is played perfectly, just by knowing the set of available moves and the number of stones in the pile. We can do this by considering a game with 0 stones and determining who would win this game (player 2), then increasing the number of stones in the pile one by one and determining the winner for each new pile size. In this method, we say that we are in a winning position if there is a legal move that puts the opposing player into a losing position, and in a losing position if every legal move leaves the opposing player in a winning position. Consider the following table for the (1-2-3)-Nim game:

Stones 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
Winner 2 1 1 1 2 1 1 1 2 1 1 1 2 1 1 1 2 1 1 1 2 1 1 1

We can analyze this table as follows. With 0 stones, there are no moves that any player can make, so since player 1 goes first, they cannot make a move and lose the game. When there are 1, 2, or 3 stones, player 1 can remove all the stones in the pile, and in each case player 2 will be looking at a situation where there are no stones to remove. When there are 4 stones, no matter how many stones player 1 removes, player 2 will be able to remove the remaining stones to ensure that player 1 is looking at a situation with no stones. We can repeat this process for any number of stones and we arrive at a table similar to the one listed above.
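
The table above can be generated mechanically. Here is a JavaScript sketch of that idea (an illustration added here, not the code on my Nim games page):

// For each pile size from 0 up to maxStones, decide who wins under perfect play.
// A pile is a win for the player to move if some legal move leaves the opponent
// facing a losing pile; otherwise it is a loss for the player to move.
function nimWinners(moves, maxStones) {
  var winners = [];
  for (var stones = 0; stones <= maxStones; stones++) {
    var playerToMoveWins = false;
    for (var i = 0; i < moves.length; i++) {
      if (moves[i] <= stones && winners[stones - moves[i]] === 2) {
        playerToMoveWins = true;
        break;
      }
    }
    // 1 means the player about to move wins; 2 means the opposing player wins.
    winners[stones] = playerToMoveWins ? 1 : 2;
  }
  return winners;
}

console.log(nimWinners([1, 2, 3], 23).join(" "));
// 2 1 1 1 2 1 1 1 ... matching the table above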

I have a script at my Nim games page where the set of possible moves and the number of stones in the pile are generated randomly and users get to play against a computer. Check it out and let me know what you think.

The Corral Puzzle

I just finished a script on The Corral Puzzle.

I really enjoy puzzles. Even more than simply solving puzzles, I like understanding the thought process behind designing them and trying to generate similar puzzles that are difficult in their own right yet still solvable. So far I've written works on The Sudoku Puzzle, The Nonogram Puzzle, and The Shade the Cells Puzzle. Today, I want to write about the Corral Puzzle.

The story behind how I discovered this puzzle is twofold. Initially, a coworker brought a Corral puzzle to my desk to see if I could solve it. It took me a minute, but I was able to get to the correct solution. We did a few more afterwards and I wrote some scripts to try to create new instances of these puzzles, but for a while it remained in the "unfinished work" category. Then sometime earlier this year, I purchased the book "Brain Workout: Math & Logic Puzzles" by Dave Tuller and Michael Rios. I was astonished to see that the first puzzle in this book was the Corral puzzle (which is where I learned the name of the puzzle, as before it was just something presented to me by a coworker). The book has about 20 challenging puzzles (not all of which I have solved), but I was left again wondering how they came up with the puzzles. Before going further, I'll give the rules of the puzzle and an example of a problem with the corresponding solution.

We are given a square grid (say 4 rows and 4 columns, or 5 rows and 5 columns, or in general n rows and n columns). Inside the grid, some of the cells have a number inside them. The object of the puzzle is to draw a bag (also known as a corral) around the numbered cells. The constraint is that each number tells how many cells of the bag can be "seen" from that cell, counting the cell itself and looking only horizontally and vertically in both directions without leaving the bag. There are no restrictions on the shape of the bag except that it actually represents a closed loop inside the grid.

Consider the following example:

In this example, we are told the following hints:

  1. Cell (1, 2) (row 1, column 2) can see 6 of its neighbors.
  2. Cell (3, 1) can see 3 of its neighbors.
  3. Cell (4, 2) can see 5 of its neighbors.
  4. Cell (4, 3) can see 5 of its neighbors.

From these hints, we can reach the following solution:

Our corral in the solution would be to place the contiguous block of blue (turquoise) cells into a bag. We can check that the solution satisfies the assumptions as follows:

  1. Cell (1, 2) can see cells (1, 1), (1, 2), (1, 3), (2, 2), (3, 2), and (4, 2), which is 6 cells.
  2. Cell (3, 1) can see cells (3, 1), (3, 2), and (3, 3), which is 3 cells.
  3. Cell (4, 2) can see cells (1, 2), (2, 2), (3, 2), (4, 2), and (4, 3), which is 5 cells.
  4. Cell (4, 3) can see cells (1, 3), (2, 3), (3, 3), (4, 3), and (4, 2), which is 5 cells.

So we can see that this solution satisfies our assumptions.

Right now, I'm able to work with this by treating it as an instance of The Set Cover Problem. The set that we would like to cover is the set of cells inside the corral. The candidate subsets are, for each cell inside the bag, the set of corral cells that cell can see. Although Set Cover is an NP-Complete problem, we can still find feasible solutions using a number of algorithms. Here, I generate a feasible solution using the greedy approach.
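
To give a sense of how the greedy approach works here, below is a sketch of the general greedy set cover heuristic (an illustration added here, not my actual generator). The cell labels and visibility sets are hypothetical, loosely based on the example above; the heuristic repeatedly reveals the cell whose visible set covers the most not-yet-covered cells of the corral.

// Greedy set cover: the universe is the set of cells inside the corral, and each
// candidate clue cell covers the set of corral cells it can see.
function greedySetCover(universe, candidates) {
  var uncovered = universe.slice();
  var chosen = [];
  while (uncovered.length > 0) {
    var bestIndex = -1, bestCount = 0;
    for (var i = 0; i < candidates.length; i++) {
      var count = candidates[i].cells.filter(function (c) {
        return uncovered.indexOf(c) !== -1;
      }).length;
      if (count > bestCount) { bestCount = count; bestIndex = i; }
    }
    if (bestIndex === -1) { break; } // nothing left covers a new cell
    chosen.push(candidates[bestIndex].name);
    uncovered = uncovered.filter(function (c) {
      return candidates[bestIndex].cells.indexOf(c) === -1;
    });
  }
  return chosen; // the clue cells to reveal
}

// Hypothetical corral and visibility sets, labeled "r1c2" for row 1, column 2, etc.
var corral = ["r1c1", "r1c2", "r1c3", "r2c2", "r2c3", "r3c1", "r3c2", "r3c3", "r4c2", "r4c3"];
var candidates = [
  { name: "r1c2", cells: ["r1c1", "r1c2", "r1c3", "r2c2", "r3c2", "r4c2"] },
  { name: "r3c1", cells: ["r3c1", "r3c2", "r3c3"] },
  { name: "r4c3", cells: ["r1c3", "r2c3", "r3c3", "r4c3", "r4c2"] }
];
console.log(greedySetCover(corral, candidates)); // e.g. ["r1c2", "r4c3", "r3c1"]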

There is still a problem with ambiguity. Sometimes the initial puzzle generated will allow for multiple corrals to fit the original description. This is true of the example given above, as the following is also a valid solution.

There is a thin line between revealing enough cells to ensure a unique solution and revealing too many cells such that the problem becomes trivial to solve. I’m still working on that and I may update the script in the future if I decide that the ambiguity is too much to live with.

So, enough talk. Go and check out my script on The Corral Puzzle and let me know what you think.