Tag Archives: machine learning

ID3 Algorithm Decision Trees


As I grow LEARNINGlover.com, I’m always thinking of different ways to expose my own personality through the site. This is partially because it is easier for me to talk about subjects where I am already knowledgeable, but more so to help make some of the algorithms and concepts I encode more understandable; sometimes relating foreign concepts to everyday life makes them easier to grasp.

Today, I’d like to write about decision trees, and in particular the ID3 algorithm for generating them. ID3 is a machine learning algorithm that builds a model from a training data set in which each record consists of a feature vector and an outcome. Because every record in the data set has a known outcome, this falls into the category of supervised machine learning.

The model that the ID3 algorithm builds is called a decision tree. The tree is built from the features (the columns of the data set), with a branch corresponding to each value that a feature can take. The algorithm selects the next feature to split on by asking “which feature tells me the most about our data set?” This question is answered by first measuring how much “information” is in the outcomes of the data set, and then comparing that result with the amount of information remaining after splitting on each individual feature.

In order to execute this algorithm we need a way to measure both the amount of information in the outcomes of the overall data set and how much each feature tells us about those outcomes. For the first, we will use entropy, which comes from the field of information theory and coding. Entropy is based on the question of how many bits are necessary to encode the information in a set. The more information, the higher the entropy, and the more bits required to encode that information. Although we are not encoding anything here, the correlation between high information and high entropy suits our purposes.

The entropy formula is $H(S) = -\sum_{i} p_i \log_2 p_i$, where $p_i$ is the proportion of rows of the data set S whose outcome takes the i-th value.

To understand how much each feature tells us about the outcomes of the data set, we build on the concept of entropy to define the information gain of a feature. Each feature can take several values, so the data set can be partitioned according to the value of that feature. Once we have this partition, we can calculate the entropy of each subset of the rows. The information gain of a feature is then the entropy of the whole data set minus the sum, over all values the feature can take, of the entropy of the corresponding subset weighted by the probability of that value.

The information gain formula is $IG(S, F) = H(S) - \sum_{v \in \mathrm{values}(F)} \frac{|S_v|}{|S|}\, H(S_v)$, where $S_v$ is the subset of rows on which the feature F takes the value v.
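
To make these two formulas concrete, here is a small JavaScript sketch of how they could be computed (an illustration only, not the code behind the page; the row format and the helper names entropy and informationGain are assumptions for this example):

```javascript
// Entropy of a set of rows with respect to the outcome column:
// H(S) = -sum_i p_i * log2(p_i)
function entropy(rows) {
  const counts = {};
  rows.forEach(function (row) {
    counts[row.outcome] = (counts[row.outcome] || 0) + 1;
  });
  let h = 0;
  Object.keys(counts).forEach(function (key) {
    const p = counts[key] / rows.length;
    h -= p * Math.log2(p);
  });
  return h;
}

// Information gain of splitting the rows on one feature:
// IG(S, F) = H(S) - sum_v (|S_v| / |S|) * H(S_v)
function informationGain(rows, feature) {
  const partition = {};
  rows.forEach(function (row) {
    const value = row.features[feature];
    (partition[value] = partition[value] || []).push(row);
  });
  let remainder = 0;
  Object.keys(partition).forEach(function (value) {
    const subset = partition[value];
    remainder += (subset.length / rows.length) * entropy(subset);
  });
  return entropy(rows) - remainder;
}

// ID3 then splits on the feature with the largest information gain, e.g.:
// features.reduce((best, f) =>
//   informationGain(rows, f) > informationGain(rows, best) ? f : best);
```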

To illustrate this algorithm, I decided to relate it to the question of whether we think of a character in a novel as a hero or villain. This is interesting because I try to read at least one book a month, and as I’m reading I often find myself asking this question about characters based on their traits and on characters I’ve read about before. In order to build an interactive script for this problem, I considered 25 possible character traits that could be present. A subset of these 25 traits is selected and a row is generated grading a fictional character on a scale of 0 to 3 (0 meaning that they do not possess the trait at all, 3 meaning that the trait is very strong in their personality), and users are asked whether they think a character with those traits should be listed as a hero or a villain. There is then a button at the bottom of the script with the text “Build Tree” that executes the ID3 algorithm and shows a decision tree that could be used to reach the set of decisions given by the user.

The possible features are:
Abstract, Adaptable, Aggressive, Ambition, Anxiety, Artistic, Cautious, Decisive, Honesty, Dutiful, Fitness, Intellect, Independent, Introverted, Lively, Open-minded, Orderly, Paranoid, Perfectionist, Romantic, Sensitive, Stable, Tension, Warmth and Wealthy

Once users select the option to build the tree, there will be several links outlining each step in the process of building it. These links allow users to expand the information relating to a step and minimize it when done. Hopefully this will help users understand each step more clearly. I must say that, as much fun as it has been writing this program, several questions came up when trying to explain it to others. Hopefully users get as much fun from using this tool as I had in creating it. As always, feel free to contact me with any comments or questions.

Ok, so here’s a link to the ID3 Algorithm Page. Please check it out and let me know what you think.

Hierarchical Clustering

Hierarchical Clustering algorithms give a nice introduction for computer science students to unsupervised machine learning. I say this because the bottom-up approach to Hierarchical clustering (which I have implemented here) is very similar to Kruskal’s algorithm for finding the minimum spanning tree of a graph.

In Kruskal’s algorithm, we begin by creating a forest, or a set of trees where each node is its own tree. The algorithm then selects the two trees that are closest together (closest being defined as the minimum cost edge between two distinct trees) and merges those trees together. This process of merging the closest two trees is then repeated until there is only one tree remaining, which is a minimum spanning tree of the graph.

Similarly, bottom-up hierarchical clustering of a group of points begins by saying that each point is its own cluster. Then the clusters are compared to one another to check if two clusters should be merged into one. Generally there will be some stopping criterion, a tolerance ε, saying that we do not want to merge two clusters if the distance between them is greater than ε. So if the minimum distance between two clusters is less than ε, we proceed as in Kruskal’s algorithm by merging those two clusters into one. We repeat this process of merging the closest two clusters until the minimum distance between any two clusters is greater than or equal to ε, at which point we stop, and the result is a partition of our data set into distinct clusters.
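
Here is a minimal JavaScript sketch of this bottom-up procedure (an illustration, not the code behind my page). It uses single-linkage distance between clusters, i.e. the distance between their closest pair of points, which matches the Kruskal analogy above; the point format and the function names are assumptions for this example.

```javascript
// Euclidean distance between two 2D points.
function distance(a, b) {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

// Single-linkage distance between two clusters: the closest pair of points.
function clusterDistance(c1, c2) {
  let best = Infinity;
  c1.forEach(function (p) {
    c2.forEach(function (q) {
      best = Math.min(best, distance(p, q));
    });
  });
  return best;
}

function hierarchicalClustering(points, epsilon) {
  // Start with each point in its own cluster.
  let clusters = points.map(function (p) { return [p]; });
  while (clusters.length > 1) {
    // Find the closest pair of clusters.
    let bestI = 0, bestJ = 1, bestD = Infinity;
    for (let i = 0; i < clusters.length; i++) {
      for (let j = i + 1; j < clusters.length; j++) {
        const d = clusterDistance(clusters[i], clusters[j]);
        if (d < bestD) { bestD = d; bestI = i; bestJ = j; }
      }
    }
    // Stop once the closest pair is farther apart than the tolerance.
    if (bestD >= epsilon) break;
    // Merge the two closest clusters (as in Kruskal's algorithm).
    clusters[bestI] = clusters[bestI].concat(clusters[bestJ]);
    clusters.splice(bestJ, 1);
  }
  return clusters;
}
```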

Hierarchical clustering is comparable to K-Means Clustering. Here are some differences between the two approaches:

  1. K-Means Clustering requires an initial number of desired clusters, while Hierarchical clustering does not.
  2. A run of K-Means Clustering will always give K clusters, whereas Hierarchical Clustering can give more or fewer, depending on our tolerance ε.
  3. K-Means can undo previous mistakes (assignments of an element to the wrong cluster), while Hierarchical Clustering cannot.

So, here is a link to my page on Hierarchical Clustering. Hope you enjoy.

A JavaScript Implementation of MapReduce’s WordCount


You can view a JavaScript implementation of the WordCount program in MapReduce at Javascript Implementation of Mapreduce WordCount

One of the big things in the world of Data Science and Cloud Computing is the map-reduce implementation of various algorithms. This is not always a straightforward procedure, and learning to think in terms of map-reduce implementations can be a challenging conversion from a functional programming frame of mind. In light of this, I thought it would be helpful to try to help users “visualize” this concept. This is a challenging task because there are many concepts of cloud computing that I am unable to provide in this environment. However, just as many of the books on MapReduce provide pseudo-code for various implementations of algorithms in a MapReduce environment, I will attempt to show how data flows from the input, to the mappers, to the shuffle and sort phase, to the reducers, and finally to the output. I leave users the task of actually putting these into the context of a Java MapReduce environment.

I want to first speak about the concept of (key, value) pairs, which is very important in MapReduce programming. I will speak about this in the context of a WordCount program. The purpose of a WordCount program is to count the number of occurrences of each word in a given file. First, data is input to the mapper in (key, value) pairs. For our example, the key will be the line number of the input (so each line of input will go to a different mapper) and the value will be the text present on that line. Once the mapper has the input, it will perform some operation on it and output data, again in (key, value) pairs. In the WordCount example, the mappers will simply output each word that occurs on that line as a new key and the integer “1” as the associated value (note that a single mapper can output multiple (key, value) pairs).

One of the main things to understand in MapReduce is that there are a number of Mappers running on a given input file, and these Mappers cannot interact with one another. Suppose we have two different mappers, let’s call them Mapper1 and Mapper2, each working on a different line of input from a file. If both lines of input have the word “apple”, there is no way for Mapper2 to know that Mapper1’s line of input also has this word. In this setting that’s perfectly fine, because the Shuffle and Sort phase takes all the (key, value) pairs that were output by the mappers, compares the keys to one another, and whenever two keys are equal combines their respective values into a list of values. Distinct keys are sorted.

So if both Mapper1 and Mapper2 contained the word “apple” in their lines of text, then the (key, value) pair (apple, 1) will occur twice. The Shuffle and Sort phase will notice this and output the (key, value) pair (apple, {1, 1}).

Each reducer is then given a key and a list of values that were output by the mappers. The goal will be to perform some operation and again output data in (key, value) pairs. In the WordCount example, we will use what is known as the sumReducer. It gets this name because its job is simply to sum the values in the list of values and output the (key, value) pair that is the original key and this sum of values.
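
To make this data flow concrete, here is a small JavaScript sketch of the whole WordCount pipeline (a simplification for illustration, not the implementation linked above; the function names mapper, shuffleAndSort, and sumReducer are just labels chosen for this example):

```javascript
// Mapper: (lineNumber, lineText) -> list of (word, 1) pairs.
function mapper(lineNumber, lineText) {
  return lineText.split(/\s+/).filter(function (w) { return w.length > 0; })
    .map(function (word) { return [word, 1]; });
}

// Shuffle and Sort: group equal keys together and sort the keys.
function shuffleAndSort(pairs) {
  const grouped = {};
  pairs.forEach(function (pair) {
    const key = pair[0], value = pair[1];
    (grouped[key] = grouped[key] || []).push(value);
  });
  return Object.keys(grouped).sort().map(function (key) {
    return [key, grouped[key]];
  });
}

// Reducer (sumReducer): (word, [1, 1, ...]) -> (word, count).
function sumReducer(key, values) {
  return [key, values.reduce(function (a, b) { return a + b; }, 0)];
}

// Simulated job: each line of input goes to its own mapper call.
const lines = ["the apple fell", "apple pie and the apple tree"];
const mapped = lines.flatMap(function (line, i) { return mapper(i, line); });
const shuffled = shuffleAndSort(mapped);
const counts = shuffled.map(function (kv) { return sumReducer(kv[0], kv[1]); });
console.log(counts); // e.g. [["and",1],["apple",3],["fell",1],...]
```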

You can view a JavaScript implementation of this at Javascript Implementation of Mapreduce WordCount

Hidden Markov Models: The Baum-Welch Algorithm

Suppose you are at a table at a casino and notice that things don’t look quite right. Either the casino is extremely lucky, or things should have averaged out more than they have. You view this as a pattern recognition problem and would like to understand the number of ‘loaded’ dice that the casino is using and how these dice are loaded. To accomplish this you set up a number of Hidden Markov Models, where the loaded dice are the latent (hidden) variables, and would like to determine which of these models, if any, is most likely to be in use.

First, let’s go over a few things.

We will call each roll of the dice an observation. The observations will be stored in variables $o_1, o_2, \ldots, o_T$, where T is the total number of observations.

To generate a hidden Markov Model (HMM) we need to determine 5 parameters:

  • The N states of the model, defined by $S = \{S_1, \ldots, S_N\}$
  • The M possible output symbols, defined by $V = \{v_1, v_2, \ldots, v_M\}$
  • The state transition probability distribution $A = \{a_{ij}\}$, where $a_{ij}$ is the probability that the state at time t+1 is $S_j$, given that the state at time t is $S_i$.
  • The observation symbol probability distribution $B = \{b_j(k)\}$, where $b_j(k)$ is the probability that the symbol $v_k$ is emitted in state $S_j$.
  • The initial state distribution $\pi = \{\pi_i\}$, where $\pi_i$ is the probability that the model is in state $S_i$ at time t = 0.

    The HMMs we’ve generated are based on two questions. For each question, you have provided 3 different answers which leads to 9 possible HMMs. Each of these models has its corresponding state transition and emission distributions.

    • How often does the casino change dice?
      • 0) Dealer Repeatedly Uses Same Dice
      • 1) Dealer Uniformly Changes Die
      • 2) Dealer Rarely Uses Same Dice
    • Which sides on the loaded dice are more likely?
      • 0) Larger Numbers Are More Likely
      • 1) All Numbers Are Equally Likely
      • 2) Smaller Numbers Are More Likely
    Each model is labeled by a pair (answer to “How often does the casino change dice?”, answer to “Which sides on the loaded dice are more likely?”), giving the nine combinations (0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2), (2, 0), (2, 1), and (2, 2).

    One of the interesting problems associated with Hidden Markov Models is called the Learning Problem, which asks the question “How can I improve an HMM so that it would be more likely to have generated the sequence $O = o_1, o_2, \ldots, o_T$?”

    The Baum-Welch algorithm answers this question using an Expectation-Maximization approach. It creates two auxiliary variables, $\gamma_t(i)$ and $\xi_t(i, j)$. The variable $\gamma_t(i)$ represents the probability of being in state i at time t, given the entire observation sequence. Likewise $\xi_t(i, j)$ represents the joint probability of being in state i at time t and of being in state j at time t+1, given the entire observation sequence. They can be calculated by

    $$\gamma_t(i) = \frac{\alpha_t(i)\, \beta_t(i)}{\sum_{j=1}^{N} \alpha_t(j)\, \beta_t(j)}$$

    and

    $$\xi_t(i, j) = \frac{\alpha_t(i)\, a_{ij}\, \beta_{t+1}(j)\, b_j(o_{t+1})}{\sum_{i'=1}^{N} \sum_{j'=1}^{N} \alpha_t(i')\, a_{i'j'}\, \beta_{t+1}(j')\, b_{j'}(o_{t+1})}$$

    As you can see, these are a direct result of the $\alpha$ values calculated by the Forward algorithm and the $\beta$ values calculated by the Backwards algorithm. Once we have calculated these variables, we can update the parameters of the model as follows:

    $$\bar{\pi}_i = \gamma_1(i)$$

    $$\bar{a}_{ij} = \frac{\sum_{t=1}^{T-1} \xi_t(i, j)}{\sum_{t=1}^{T-1} \gamma_t(i)}$$

    $$\bar{b}_j(o_k) = \frac{\sum_{t=1,\; o_t = o_k}^{T} \gamma_t(j)}{\sum_{t=1}^{T} \gamma_t(j)}, \qquad 1 \le j \le N, \quad 1 \le k \le M$$

    We can iterate this procedure a finite number of times or until it converges. This will generate a new model, $\bar{\lambda} = \{N, M, \bar{A}, \bar{B}, \bar{\pi}\}$.
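
    As a rough illustration (not the code behind the page), here is a JavaScript sketch of one re-estimation step. It assumes the α table from the Forward algorithm and the β table from the Backwards algorithm have already been computed; the array layouts and the function name baumWelchStep are assumptions for this example.

```javascript
// One Baum-Welch re-estimation step.
// alpha[t][i], beta[t][i]: forward/backward probabilities (T x N arrays).
// A[i][j]: transition probabilities, B[j][k]: emission probabilities,
// obs[t]: index of the symbol observed at time t.
function baumWelchStep(A, B, obs, alpha, beta) {
  const T = obs.length, N = A.length, M = B[0].length;

  // gamma[t][i] = P(state i at time t | observations)
  const gamma = [];
  for (let t = 0; t < T; t++) {
    let norm = 0;
    for (let j = 0; j < N; j++) norm += alpha[t][j] * beta[t][j];
    gamma[t] = [];
    for (let i = 0; i < N; i++) gamma[t][i] = (alpha[t][i] * beta[t][i]) / norm;
  }

  // xi[t][i][j] = P(state i at time t, state j at time t+1 | observations)
  const xi = [];
  for (let t = 0; t < T - 1; t++) {
    let norm = 0;
    for (let i = 0; i < N; i++)
      for (let j = 0; j < N; j++)
        norm += alpha[t][i] * A[i][j] * beta[t + 1][j] * B[j][obs[t + 1]];
    xi[t] = [];
    for (let i = 0; i < N; i++) {
      xi[t][i] = [];
      for (let j = 0; j < N; j++)
        xi[t][i][j] =
          (alpha[t][i] * A[i][j] * beta[t + 1][j] * B[j][obs[t + 1]]) / norm;
    }
  }

  // Updated parameters: pi_bar, a_bar, b_bar from the formulas above.
  const newPi = gamma[0].slice();
  const newA = [], newB = [];
  for (let i = 0; i < N; i++) {
    newA[i] = [];
    let denom = 0;
    for (let t = 0; t < T - 1; t++) denom += gamma[t][i];
    for (let j = 0; j < N; j++) {
      let num = 0;
      for (let t = 0; t < T - 1; t++) num += xi[t][i][j];
      newA[i][j] = num / denom;
    }
  }
  for (let j = 0; j < N; j++) {
    newB[j] = [];
    let denom = 0;
    for (let t = 0; t < T; t++) denom += gamma[t][j];
    for (let k = 0; k < M; k++) {
      let num = 0;
      for (let t = 0; t < T; t++) if (obs[t] === k) num += gamma[t][j];
      newB[j][k] = num / denom;
    }
  }
  return { pi: newPi, A: newA, B: newB };
}
```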

    There is more on this example at LEARNINGlover.com: Hidden Markov Models: The Baum-Welch Algorithm.


Hidden Markov Models: The Viterbi Algorithm

I just finished working on LEARNINGlover.com: Hidden Markov Models: The Viterbi Algorithm. Here is an introduction to the script.

Suppose you are at a table at a casino and notice that things don’t look quite right. Either the casino is extremely lucky, or things should have averaged out more than they have. You view this as a pattern recognition problem and would like to understand the number of ‘loaded’ dice that the casino is using and how these dice are loaded. To accomplish this you set up a number of Hidden Markov Models, where the loaded dice are the latent variables, and would like to determine which of these models, if any, is most likely to be in use.

First, let’s go over a few things.

We will call each roll of the dice an observation. The observations will be stored in variables $o_1, o_2, \ldots, o_T$, where T is the total number of observations.

To generate a hidden Markov Model (HMM) we need to determine 5 parameters:

  • The N states of the model, defined by $S = \{S_1, \ldots, S_N\}$
  • The M possible output symbols, defined by $V = \{v_1, v_2, \ldots, v_M\}$
  • The state transition probability distribution $A = \{a_{ij}\}$, where $a_{ij}$ is the probability that the state at time t+1 is $S_j$, given that the state at time t is $S_i$.
  • The observation symbol probability distribution $B = \{b_j(k)\}$, where $b_j(k)$ is the probability that the symbol $v_k$ is emitted in state $S_j$.
  • The initial state distribution $\pi = \{\pi_i\}$, where $\pi_i$ is the probability that the model is in state $S_i$ at time t = 0.

    The HMMs we’ve generated are based on two questions. For each question, you have provided 3 different answers which leads to 9 possible HMMs. Each of these models has its corresponding state transition and emission distributions.

    • How often does the casino change dice?
      • 0) Dealer Repeatedly Uses Same Dice
      • 1) Dealer Uniformly Changes Die
      • 2) Dealer Rarely Uses Same Dice
    • Which sides on the loaded dice are more likely?
      • 0) Larger Numbers Are More Likely
      • 1) All Numbers Are Equally Likely
      • 2) Smaller Numbers Are More Likely
    Each model is labeled by a pair (answer to “How often does the casino change dice?”, answer to “Which sides on the loaded dice are more likely?”), giving the nine combinations (0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2), (2, 0), (2, 1), and (2, 2).

    One of the interesting problems associated with Hidden Markov Models is called the Decoding Problem, which asks the question “What is the most likely sequence of states that the HMM would go through to generate the sequence $O = o_1, o_2, \ldots, o_T$?”

    The Viterbi algorithm answers this question using Dynamic Programming. It creates an auxiliary variable $\delta_t(i)$, which is the highest probability that the partial observation sequence $o_1, \ldots, o_t$ can have, given that the current state is i. This variable can be calculated by the following formula:

    $$\delta_t(i) = \max_{q_1, \ldots, q_{t-1}} p\{q_1, \ldots, q_{t-1},\, q_t = i,\, o_1, \ldots, o_t \mid \lambda\}.$$

    $$\delta_1(j) = \pi_j\, b_j(o_1), \qquad \text{for } 1 \le j \le N.$$

    Once we have calculated $\delta_t(j)$, we also keep a pointer to the state that achieved the maximum. We can then find the optimal path by looking at $\arg\max_{1 \le j \le N} \delta_T(j)$ and backtracking the sequence of states using the pointers.
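
    For illustration, here is a minimal JavaScript sketch of the Viterbi algorithm (not the code behind the page). It uses the initialization above together with the standard recursion $\delta_t(j) = \max_i [\delta_{t-1}(i)\, a_{ij}]\, b_j(o_t)$, and backtracks through the stored pointers; the array layouts and the function name viterbi are assumptions for this example.

```javascript
// A sketch of the Viterbi algorithm.
// A[i][j]: transition probabilities, B[j][k]: emission probabilities,
// pi[i]: initial distribution, obs[t]: observed symbol index at time t.
function viterbi(A, B, pi, obs) {
  const T = obs.length, N = A.length;
  const delta = [[]], psi = [[]]; // psi stores the back-pointers

  // Initialization: delta_1(j) = pi_j * b_j(o_1)
  for (let j = 0; j < N; j++) {
    delta[0][j] = pi[j] * B[j][obs[0]];
    psi[0][j] = 0;
  }

  // Recursion: delta_t(j) = max_i [ delta_{t-1}(i) * a_ij ] * b_j(o_t)
  for (let t = 1; t < T; t++) {
    delta[t] = []; psi[t] = [];
    for (let j = 0; j < N; j++) {
      let best = -Infinity, bestState = 0;
      for (let i = 0; i < N; i++) {
        const score = delta[t - 1][i] * A[i][j];
        if (score > best) { best = score; bestState = i; }
      }
      delta[t][j] = best * B[j][obs[t]];
      psi[t][j] = bestState;
    }
  }

  // Termination and backtracking.
  let last = 0;
  for (let j = 1; j < N; j++) if (delta[T - 1][j] > delta[T - 1][last]) last = j;
  const path = [last];
  for (let t = T - 1; t > 0; t--) path.unshift(psi[t][path[0]]);
  return path; // most likely sequence of state indices
}
```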

    There is more on this example at LEARNINGlover.com: Hidden Markov Models: The Viterbi Algorithm.


Hidden Markov Models: The Backwards Algorithm

I just finished working on LEARNINGlover.com: Hidden Markov Models: The Backwards Algorithm. Here is an introduction to the script.

Suppose you are at a table at a casino and notice that things don’t look quite right. Either the casino is extremely lucky, or things should have averaged out more than they have. You view this as a pattern recognition problem and would like to understand the number of ‘loaded’ dice that the casino is using and how these dice are loaded. To accomplish this you set up a number of Hidden Markov Models, where the number of loaded dice is the latent variable, and would like to determine which of these models, if any, is most likely to be in use.

First, let’s go over a few things.

We will call each roll of the dice an observation. The observations will be stored in variables $o_1, o_2, \ldots, o_T$, where T is the total number of observations.

To generate a hidden Markov Model (HMM) we need to determine 5 parameters:

  • The N states of the model, defined by $S = \{S_1, \ldots, S_N\}$
  • The M possible output symbols, defined by $V = \{v_1, v_2, \ldots, v_M\}$
  • The state transition probability distribution $A = \{a_{ij}\}$, where $a_{ij}$ is the probability that the state at time t+1 is $S_j$, given that the state at time t is $S_i$.
  • The observation symbol probability distribution $B = \{b_j(k)\}$, where $b_j(k)$ is the probability that the symbol $v_k$ is emitted in state $S_j$.
  • The initial state distribution $\pi = \{\pi_i\}$, where $\pi_i$ is the probability that the model is in state $S_i$ at time t = 0.

The HMMs we’ve generated are based on two questions. For each question, you have provided 3 different answers which leads to 9 possible HMMs. Each of these models has its corresponding state transition and emission distributions.

  • How often does the casino change dice?
    • 0) Dealer Repeatedly Uses Same Dice
    • 1) Dealer Uniformly Changes Die
    • 2) Dealer Rarely Uses Same Dice
  • Which sides on the loaded dice are more likely?
    • 0) Larger Numbers Are More Likely
    • 1) All Numbers Are Equally Likely
    • 2) Smaller Numbers Are More Likely
Each model is labeled by a pair (answer to “How often does the casino change dice?”, answer to “Which sides on the loaded dice are more likely?”), giving the nine combinations (0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2), (2, 0), (2, 1), and (2, 2).

One of the interesting problems associated with Hidden Markov Models is called the Evaluation Problem, which asks the question “What is the probability that the given sequence of observations $O = o_1, o_2, \ldots, o_T$ was generated by the HMM $\lambda$?” In general, this probability, $p\{O \mid \lambda\}$, can be calculated directly by summing over all possible state sequences. However, because of the complexity of that calculation, there are more efficient methods.

The backwards algorithm is one such method (as is the forward algorithm). It creates an auxiliary variable $\beta_t(i)$, which is the probability that the model generates the remaining observation sequence $o_{t+1}, \ldots, o_T$, given that it is in state $S_i$ at time t, where $1 \le t \le T$. This variable can be calculated by the following formula:

$$\beta_t(i) = \sum_{j=1}^{N} \beta_{t+1}(j)\, a_{ij}\, b_j(o_{t+1})$$

We also need that $\beta_T(i) = 1$, for $1 \le i \le N$.

Once we have calculated the $\beta_t(i)$ variables, we can solve the evaluation problem via $p\{O \mid \lambda\} = \sum_{i=1}^{N} \pi_i\, b_i(o_1)\, \beta_1(i)$.
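
For illustration, here is a minimal JavaScript sketch of the backwards algorithm together with the evaluation step (not the code behind the page); the array layouts and the function name backwardAlgorithm are assumptions for this example.

```javascript
// A sketch of the Backwards algorithm.
// A[i][j]: transition probabilities, B[j][k]: emission probabilities,
// pi[i]: initial distribution, obs[t]: observed symbol index at time t.
function backwardAlgorithm(A, B, pi, obs) {
  const T = obs.length, N = A.length;
  const beta = [];
  for (let t = 0; t < T; t++) beta[t] = [];

  // Initialization: beta_T(i) = 1 for every state i.
  for (let i = 0; i < N; i++) beta[T - 1][i] = 1;

  // Recursion: beta_t(i) = sum_j beta_{t+1}(j) * a_ij * b_j(o_{t+1})
  for (let t = T - 2; t >= 0; t--) {
    for (let i = 0; i < N; i++) {
      let sum = 0;
      for (let j = 0; j < N; j++)
        sum += beta[t + 1][j] * A[i][j] * B[j][obs[t + 1]];
      beta[t][i] = sum;
    }
  }

  // Evaluation: p(O | lambda) = sum_i pi_i * b_i(o_1) * beta_1(i)
  let probability = 0;
  for (let i = 0; i < N; i++)
    probability += pi[i] * B[i][obs[0]] * beta[0][i];
  return { beta: beta, probability: probability };
}
```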

There is more on this example at LEARNINGlover.com: Hidden Markov Models: The Backwards Algorithm.


Understanding Bayes’ Theorem

An Image of Bayes’ Theorem Script

I’ve finished a script that helps understand Bayes’ Theorem.

If we have a collection of mutually exclusive (i.e. non-overlapping) sets $B_i$, for $i \in \{0, 1, 2, \ldots, n\}$ for some integer n, then the union of these sets forms a sample space. Let’s call the sample space S. Suppose that we also have some set (also known as an event) A which is a subset of S. Bayes’ Theorem considers the probability that one of these mutually exclusive events (one of the $B_i$’s) caused the observed event A.

This probability can be calculated by the formula

$$\Pr(B_j \mid A) = \frac{\Pr(B_j)\, \Pr(A \mid B_j)}{\sum_{i} \Pr(B_i)\, \Pr(A \mid B_i)}$$

The theorem helps us determine the probability of the event $B_j$ given A, or in plainer English, the probability that the event $B_j$ is the cause that gives rise to the observed event A. The numerator is the product of the probability of the causal event ($\Pr(B_j)$) and the conditional probability of the observed event given the causal event ($\Pr(A \mid B_j)$). This numerator could be replaced by its equivalent, the probability of the set $A \cap B_j$. Likewise, the denominator is the sum, over all the causal events, of the probability of each causal event times the conditional probability of the observed event given that particular causal event. Each term in this denominator could be replaced by its equivalent $\Pr(A \cap B_i)$, and because the $B_i$’s are pairwise mutually exclusive, these terms sum to the total probability of A. So we are able to replace the probability of A with $\sum_i \Pr(B_i)\, \Pr(A \mid B_i)$ by the law of total probability.

An example that would use Bayes’ Theorem is analyzing the results of an election. The set of mutually exclusive events could be membership in a political party (Democrat, Republican, or Independent). The observed event could be the election of an individual. And the conditional distributions could be the percentage of each party that voted for this individual. If we want to calculate how significant each party was to the individual’s election, we’d use Bayes’ Theorem.
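
To make the formula concrete, here is a small JavaScript example for the election scenario; the party shares and support rates below are made-up numbers, purely for illustration.

```javascript
// Hypothetical party membership (the mutually exclusive B_i's) and the
// fraction of each party that voted for the winning candidate (Pr(A | B_i)).
const priors = { Democrat: 0.40, Republican: 0.40, Independent: 0.20 };
const support = { Democrat: 0.80, Republican: 0.10, Independent: 0.50 };

// Bayes' Theorem: Pr(B_j | A) = Pr(B_j) Pr(A | B_j) / sum_i Pr(B_i) Pr(A | B_i)
function bayes(priors, likelihoods, cause) {
  let total = 0;
  Object.keys(priors).forEach(function (b) {
    total += priors[b] * likelihoods[b]; // law of total probability
  });
  return (priors[cause] * likelihoods[cause]) / total;
}

console.log(bayes(priors, support, "Democrat"));    // 0.32 / 0.46 ≈ 0.696
console.log(bayes(priors, support, "Independent")); // 0.10 / 0.46 ≈ 0.217
```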

The script I’ve written to help understand Bayes’ Theorem works as follows:
– A set of mutually exclusive sets is randomly generated (the number of sets also varies). These sets are called $B_i$ for $i \in \{0, \ldots, n\}$.
– A set A is randomly generated from the union of the Bi‘s.
– A table is displayed showing:
Pr(Bi) for each i on line 1.
Pr(A | Bi) for each i on line 2.

– The user is given the option to select which of the mutually exclusive sets they would like to use to calculate the probability that this set caused the event A.
– Once a set is chosen, the user clicks the “Calculate Conditional” button and Bayes’ Theorem gives the result.
– If the “show work” checkbox was checked, then the steps used in this calculation are also shown.
– All work is done using fractions to give an idea of where the numbers come from.

Other Blogs that have covered this topic:
Better Explained
Bayes’ Theorem-qed

K-Means Clustering

Images on the page show how the data looks before and after the clustering algorithm is run.

I have now uploaded my K-Means Clustering Script. The script generates a set of random numbers (as ordered pairs) and asks the user how many clusters we should divide the numbers into and a maximum number of iterations to go through before we stop.

One of the largest problems that we face today is understanding data. Before we even get to the point of trying to interpret what the data means and making decisions based on it, there is often a problem with the sheer amount of data. Clustering algorithms seek to address this by defining some notion of similarity and using that notion to group the data into sets or ‘clusters’, where two elements belong to the same cluster if they are considered similar. Once elements are placed into clusters, we can analyze this (generally smaller) set of clusters instead of the entire data set, which should help in understanding the data.

Finding an exact solution to this problem is computationally difficult. Instead, we can approximate a solution rather quickly using the k-means clustering algorithm. This algorithm attempts to separate a given data set into a user-specified number (k) of groups, where k signifies the number of clusters we will generate. The algorithm works by initially selecting k elements of the data to serve as the “center” of each cluster. Every element of the data is then compared to each cluster center and assigned to the cluster with the closest center. Once every element has been assigned to a cluster, the cluster centers may have changed, so the next step is to look at the elements inside each cluster and compute the new cluster center. This process of assigning elements to (new) clusters and determining (new) cluster centers is repeated until either no element changes cluster or we reach the maximum number of iterations specified by the user.
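
For illustration, here is a minimal JavaScript sketch of this procedure on two-dimensional points (not the script itself). Picking the first k points as the initial centers is a simplification, and the point format is an assumption for this example.

```javascript
// A minimal sketch of k-means on 2D points.
function distance(a, b) {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

function kMeans(points, k, maxIterations) {
  // Initially pick k elements of the data to serve as cluster centers.
  let centers = points.slice(0, k).map(function (p) { return { x: p.x, y: p.y }; });
  let assignment = [];

  for (let iter = 0; iter < maxIterations; iter++) {
    // Assign every point to the closest cluster center.
    const newAssignment = points.map(function (p) {
      let best = 0;
      for (let c = 1; c < k; c++)
        if (distance(p, centers[c]) < distance(p, centers[best])) best = c;
      return best;
    });

    // Stop if no point changed cluster.
    if (newAssignment.every(function (c, i) { return c === assignment[i]; })) break;
    assignment = newAssignment;

    // Recompute each center as the mean of the points assigned to it.
    for (let c = 0; c < k; c++) {
      const members = points.filter(function (p, i) { return assignment[i] === c; });
      if (members.length > 0) {
        centers[c] = {
          x: members.reduce(function (s, p) { return s + p.x; }, 0) / members.length,
          y: members.reduce(function (s, p) { return s + p.y; }, 0) / members.length,
        };
      }
    }
  }
  return { centers: centers, assignment: assignment };
}
```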

K-Means Clustering can be thought of as an algorithm in the area of unsupervised machine learning. Machine learning is a field of artificial intelligence that focuses on computer programs that have the ability to learn without being explicitly programmed. Unsupervised machine learning seeks to interpret data without any knowledge of what a “correct” interpretation would be. In comparison, supervised machine learning algorithms are useful for data that has been separated into categories. These algorithms generally divide the data into a training set and a test set and seek to produce a function that agrees with the results on the training set.

New Years is a LEARNINGlover Thing!

Starting this web site has served as the perfect opportunity to unwind. In particular, two blogs I wrote recently have served different purposes. The first was “The Degrees of Consciousness of a Black Nerd“, where I spoke about many of the things I think about being who I am and relating to the (somewhat unique) set of people that I communicate with on a daily basis. The other was what I’ve been working on since mid December. It’s the blog entry I wrote on Sudoku and the Sudoku program that I wrote last month using the Dancing Links Algorithm. Since originally writing that, I’ve updated the program with a lot of Sudoku problems, as well as two types of “hints”. One generates the “possibilities matrix”, which basically just shows what is possible for each cell. The other scans the possibilities matrix and searches for isolated cells (cells where some number can only go in one row/column/subgrid). Both those additions were extremely fun and provided a nice opportunity to program in my spare time.

So I’ve been thinking about these two things and how much I enjoyed them, but for different reasons. The Degrees of Consciousness of a Black Nerd brought much attention on Facebook, where the idea was both accepted and rejected, and I was able to explain the ideas further and hear similar stories from others who had similar experiences. Sudoku, on the other hand, hasn’t generated as much conversation. A few friends have told me that they liked the program, but I don’t know of too many people outside of myself using it. That’s OK. I didn’t really create the program for publication, more so because I enjoy Sudoku and took it as a challenge to write a program to solve a puzzle.

That being said, I’ve been thinking about some other things that I would like to do – hopefully they’ll be more than just my opinion, but have some programming, operations research, or mathematical context to them as well. But here’s a list of things that I want to write about in the near future.

  • Connections between math and football (or sports in general). Anybody who knows me knows that I’m a huge sports fan. At other sites, I’ve posted stuff on QB rating systems and the flaws/inaccuracies in them. Part of me would like to look further into this stuff and either do a comparison, create a new rating system, or just try to understand (explain) different scouting metrics, particularly for QBs.
  • I wrote the Sudoku program, but that’s a well studied problem, so it was easy to find research that helped me understand other approaches. I also studied some problems in Ramsey Theory that can be represented as exact cover problems, and it would be interesting to use the Dancing Links algorithm to solve these problems as well, maybe with a comparative analysis against benchmarks.
  • One of the things I’ve picked up on lately is machine learning. While I’ve added flash cards on Bayesian Networks, I would like to add some programs on things like k-means clustering.
  • I added my sorting algorithms a few weeks ago, but would like to also add something on data structures (arrays, linked lists, trees, heaps, hash tables, etc). In undergrad this was called the class that weeded people out of computer science majors, so doing something on this type of stuff I think could be helpful to those who want to understand it better.
  • I have been in the process of writing a linear programming (Simplex) implementation for a while. I would like to get back to that to allow for a solver that can do at least simple algorithms.
  • I’ve written a few drafts that connect math to different areas that I’m interested in (music, sports, philosophy, religion, etc), but I need to find a better way to present the stuff because as they’re currently stated, this stuff can easily be misinterpreted.

The beautiful thing about this site is that I’m not constrained by any advisor or boss or deadlines. It’s more of a “how am I feeling right now” kind of thing, and these are the things I’m feeling right now. So this list is kind of my “New Years Resolutions” for LEARNINGlover.com, and while I reserve the right to change my priorities any time I feel that something else deserves my attention more, these are the things I’m planning to spend time on in the next few weeks.