Tag Archives: binary

Binary Puzzles

As you can probably tell, I’m a big fan of puzzles. In a sense, you can say that a good puzzle is nothing but a particular instance of a complex problem that we’re being asked to solve. What exactly makes a problem complex, though?

To a large extent that depends on the person playing the puzzle. Different puzzles are based on, and meant to highlight, different concepts. Some puzzles focus on dynamic programming, like the Triangle Sum Puzzles or the Unidirectional TSP Puzzles.

Other puzzles are based on more complicated problems, in many cases instances of NP-complete problems. Unlike the puzzles mentioned above, there is generally no strategy known to solve these puzzles quickly. Some basic examples are puzzles like the Independent Set Puzzles, which simply give a random (small) instance of the problem and ask users to solve it. Most approaches involve using logical deduction to reduce the number of possible choices until a “guess” must be made, and then implementing some form of backtracking (which is not really guessing, since you can form the logical conclusion that if the guessed value were true, you would reach either (a) a violation of the rules or (b) a completed puzzle).

One day a few months back I was driving home from work and traffic was so bad that I decided to stop at the store. While browsing the books, I noticed a puzzle collection. Among the puzzles I found in that book were the Range Puzzles I posted about earlier. However, I also found binary puzzles.

Filled Binary puzzles are based on three simple rules:
1. No three adjacent cells in any row or column can contain the same value (so no 000 or 111 in any row or column).
2. Every row and column must have the same number of zeros and ones.
3. Each row and column must be unique.

There is a paper from 2013 stating that Binary Puzzles are NP-complete. There is another paper that discusses strategies involved in Solving a Binary Puzzle.

Once I finished the puzzles in that book, the question quickly became (as it always does): where can I get more? I began writing a generator for these puzzles and finished it earlier this year. Now I want to share it with you. You can visit the examples section to play these games at Binary Puzzles.

Below I will go over a sample puzzle and how I go about solving it. First let’s look at a 6 by 6 puzzle with some hints given (underscores mark empty cells):

_ 0 1 _ _ _
0 _ 1 0 _ _
1 1 _ 0 _ _
_ 1 _ _ 0 _
_ _ _ 0 _ _
1 _ _ 1 _ 0

We look at this table and can first look for locations where we have a “forced move”. An obvious case is three adjacent cells in the same row or column where two already have the same value; the remaining cell must take the opposite value. A second case is when a row or column already has its full count of zeros or ones; the remaining cells in that row or column must then have the opposite value.

So in the above puzzle (using (column, row) coordinates), the values in cells (2, 2) and (2, 5) must be 0 because cells (2, 3) and (2, 4) are both 1. Now column 2 has 5 of its 6 values filled in, three of them 0s, so the last value in this column, at (2, 6), must be a 1 in order for there to be an equal number of 0s and 1s.

For some easier puzzles, these first two move types will get you far enough to completely fill in all the cells. More advanced puzzles, though, may require a more thorough analysis.
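
To make these two move types concrete, here is a minimal Python sketch of them (an illustration of the idea, not the generator behind the site). Cells hold 0, 1, or None for an empty cell, and the two rules are applied to every row and column until nothing changes.

def fill_line(line):
    """Apply both forced-move rules to a single row or column (a list of
    0, 1, and None values). Returns True if any cell was filled in."""
    n, changed = len(line), False

    # Rule 1: no three adjacent equal values. The patterns x x _ , _ x x
    # and x _ x all force the blank cell to hold the opposite value 1 - x.
    for i in range(n - 2):
        a, b, c = line[i], line[i + 1], line[i + 2]
        if a is not None and a == b and c is None:
            line[i + 2], changed = 1 - a, True
        elif b is not None and b == c and a is None:
            line[i], changed = 1 - b, True
        elif a is not None and a == c and b is None:
            line[i + 1], changed = 1 - a, True

    # Rule 2: once half the cells hold the same value, the rest take the other.
    for v in (0, 1):
        if line.count(v) == n // 2 and None in line:
            for i, cell in enumerate(line):
                if cell is None:
                    line[i], changed = 1 - v, True
    return changed

def forced_moves(grid):
    """Run the two rules over every row and column until nothing changes."""
    changed = True
    while changed:
        changed = False
        for row in grid:
            changed |= fill_line(row)
        for c in range(len(grid[0])):
            col = [row[c] for row in grid]
            if fill_line(col):
                changed = True
                for r, v in enumerate(col):
                    grid[r][c] = v
    return grid

# The sample puzzle from above; None marks an empty cell.
_ = None
puzzle = [[_, 0, 1, _, _, _],
          [0, _, 1, 0, _, _],
          [1, 1, _, 0, _, _],
          [_, 1, _, _, 0, _],
          [_, _, _, 0, _, _],
          [1, _, _, 1, _, 0]]
for row in forced_moves(puzzle):
    print(row)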

As always, check it out and let me know what you think. 

Learn About “the Other” Algebra

When I visit family for the holidays, the topic of my being a mathematician always seems to come up. There is always a child in the family struggling with maths, and when I ask the subject of their struggles, the word “algebra” is always the culprit. I’ll save for another post my ideas on how this subject should be taught in high school and some of the main problems facing students.

I want to concentrate this post on a topic that few outside the mathematical world know about, but which many inside this world (myself included) hold dearly: the topic of modern or abstract algebra. I refer to this as “the other” algebra because a conversation about the word “algebra” will generally revolve around concepts such as systems of equations, slopes, intercepts, intersection, rise-over-run, point-slope, and other terminology that limits algebra to a specific domain (the set of real or complex numbers) while at the same time ignoring the underlying beauty associated with this area.

I wrote previously about the area of set theory and the beauty associated with taking math out of the scope of a basic number line and into a much more undefined space. Abstract algebra is a continuation of set theory where in addition to our set, we have a (binary) operation defined on any two elements of this set. The inclusion of this binary operation allows us to consider several different structures based on the properties that this binary operation holds.

The structures I’d like to write about today are called groups. A group is a set along with an operation (or function) defined on any two elements of the set with the following properties:
– It is closed. This means that any time we apply this function to two elements of the set, the function gives us a member of the set. In mathematical terms, for all a, b in the set A, f(a, b) must also be a member of A.
– There is an identity element. An identity element is defined as an element such that if we include it in the binary operation with any other element, the operation always returns the other element. So if the element i is the identity element, then f(i, a) = a and f(a, i) = a for any other a in the set A. Any group must have an identity element.
– Every element has an inverse. Inverse elements are based on the identity element. What this property says is that for every element, there is a way to use the binary operation to get to the identity element. So for every element a in the set A, there is an element b in the set A such that f(a, b) = i, where i is the identity element.
– The binary operator is associative. I described the associative property when I discussed the functions and relations of set theory. A function is associative if the way we group things (aka associate them) doesn’t matter. This means that for any elements a, b, and c of the set A, f(f(a, b), c) must be the same as f(a, f(b, c)).

If these four properties hold for a set A and a binary function f, then we say that the pair (A, f) is a group. We will generally use a common notation such as a · b, or a * b or simply ab to represent f(a, b).
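
For example, the set of integers together with ordinary addition forms a group: the sum of two integers is an integer (closure), 0 is the identity element since a + 0 = 0 + a = a, every integer a has the inverse -a since a + (-a) = 0, and addition is associative. By contrast, the positive integers under addition do not form a group, since there is no identity element and no element has an inverse.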

Another important concept in group theory is the idea of a Cayley table. These are similar to the multiplication tables that we drew out when we were first learning our “times tables”. For a group with n elements, we form a table with n rows and n columns. Each element of the group is written out to the left of each row and above each column (so really we can think of it as an n+1 by n+1 table with the first row and column serving as labels). Each cell of the table contains the result of the binary operation applied to the two elements indicated by its row and column (with an understanding of whether the row element comes before the column element or vice versa). Obviously, we can only do this for finite groups, as we cannot write out all the elements of an infinite set.
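
For example, here is the Cayley table for addition modulo 3 on the set {0, 1, 2}, where each cell holds the sum (mod 3) of its row element and its column element:

+ | 0  1  2
--+--------
0 | 0  1  2
1 | 1  2  0
2 | 2  0  1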

The script I’ve added is a tester that allows users to input the information for a possible group (size, name of each element, and a Cayley table); with this information the user is informed whether or not it forms a group. If it does not form a group, the reasons why are also given. Some sample groups are also provided to give insight into this area.
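
To give a feel for what such a tester has to do, here is a minimal Python sketch of a Cayley-table checker (only an illustration of the idea, not the script itself). It takes the element names and a table where table[i][j] is the result of applying the operation to elements[i] and elements[j] (row before column), and reports which group properties fail.

def check_group(elements, table):
    """Return (is_group, reasons) for the set `elements` with the given Cayley table."""
    n = len(elements)
    index = {e: i for i, e in enumerate(elements)}
    reasons = []

    # Closure: every entry of the table must itself be one of the elements.
    closed = all(entry in index for row in table for entry in row)
    if not closed:
        reasons.append("not closed: some table entry is not an element of the set")

    # Identity: some element i with f(i, a) = a and f(a, i) = a for every a.
    identity = None
    for i in range(n):
        if all(table[i][j] == elements[j] and table[j][i] == elements[j]
               for j in range(n)):
            identity = elements[i]
            break
    if identity is None:
        reasons.append("no identity element")

    # Inverses: every element a needs some b with f(a, b) = identity.
    if identity is not None:
        for i in range(n):
            if not any(table[i][j] == identity for j in range(n)):
                reasons.append(f"{elements[i]} has no inverse")

    # Associativity: f(f(a, b), c) = f(a, f(b, c)) for every triple (brute force).
    if closed and not all(
            table[index[table[a][b]]][c] == table[a][index[table[b][c]]]
            for a in range(n) for b in range(n) for c in range(n)):
        reasons.append("the operation is not associative")

    return (not reasons, reasons)

# Addition modulo 3 on {0, 1, 2} (the Cayley table shown above) passes every check.
print(check_group(["0", "1", "2"],
                  [["0", "1", "2"], ["1", "2", "0"], ["2", "0", "1"]]))
# (True, [])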

Visualizing Huffman Coding Trees

[Image: Huffman coding tree]

Here is a link to a script I finished to help visualize the Huffman Coding Algorithm.

What would you do if you wanted to transmit a message, say one written in English, but you only had a limited set of characters? Suppose these characters are 0 and 1. The only way of doing this is by writing some type of procedure to translate from our 26-letter alphabet to the 0-1 binary alphabet. There are several ways of developing these encoding functions, but we will focus on those that attempt to translate each individual character into a sequence of 0s and 1s. One of the more popular such codes today is the ASCII code, which maps each character to a binary string (of 0s and 1s) of length 8. For example, here are the ASCII codes for the lowercase letters of the alphabet.

ASCII      English
01100001   a
01100010   b
01100011   c
01100100   d
01100101   e
01100110   f
01100111   g
01101000   h
01101001   i
01101010   j
01101011   k
01101100   l
01101101   m
01101110   n
01101111   o
01110000   p
01110001   q
01110010   r
01110011   s
01110100   t
01110101   u
01110110   v
01110111   w
01111000   x
01111001   y
01111010   z

What you notice from this is that each of these encodings begins with “011”, which amounts to a lot of wasted space. ASCII code doesn’t care about this because the fixed length of each binary string allows for easy lookup of particular characters (i.e., you can start your decoding almost anywhere in the string, as long as you start at a multiple of 8).
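
As a quick illustration of that fixed-width property, here is a small Python snippet (just an illustration, not part of the script) that encodes a string into 8-bit ASCII and decodes it again by reading the bits in chunks of 8:

def ascii_encode(text):
    """Translate each character into its 8-bit ASCII code."""
    return "".join(format(ord(ch), "08b") for ch in text)

def ascii_decode(bits):
    """Read the bit string in chunks of 8 and map each chunk back to a character."""
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

bits = ascii_encode("cab")
print(bits)                # 011000110110000101100010
print(ascii_decode(bits))  # cab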

But what if we were interested in minimizing the total number of bits used by the encoded string? This is where the Huffman coding algorithm gains its fame. Unlike the ASCII coding scheme, Huffman codes assign shorter codes to the more frequently occurring characters in your string. Huffman was able to prove that this tactic guarantees the shortest possible encoding among codes that translate each individual character into its own sequence of bits.

The Huffman Coding procedure operates as follows:
1. Input the string to be encoded.
2. For each character in the input string, calculate the frequency of that character (i.e. the number of times it occurs in the input).
3. Sort the characters in the input by decreasing frequency.
4. Place the characters into a queue, with each one represented by a node.
5. While there are two or more nodes remaining in the queue:
6. Remove the two nodes with the lowest frequencies from the queue.
7. Create a node which points to the two nodes just removed from the queue (node -> left points to one node; node -> right points to the other).
8. Insert this new node into the queue, with frequency equal to the sum of the frequencies of the nodes it points to.
9. If the length of the queue is greater than 1, go back to step 5.
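
Here is a minimal Python sketch of that procedure (an illustration, not the script linked above). It uses a priority queue (heapq) in place of the sorted queue, which keeps the lowest-frequency nodes at the front, and then walks the finished tree to read off a code for each character, using 0 for a left branch and 1 for a right branch:

import heapq
from collections import Counter

def huffman_codes(text):
    """Return a dict mapping each character of `text` to its Huffman code."""
    # Steps 1-4: count frequencies and place one node per character in the queue.
    # Each queue entry is (frequency, tie_breaker, tree), where a tree is either
    # a single character or a (left, right) pair of subtrees.
    heap = [(freq, i, ch) for i, (ch, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)

    # Steps 5-9: repeatedly merge the two lowest-frequency nodes into a new node
    # whose frequency is the sum of the two, until one node remains.
    counter = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, counter, (left, right)))
        counter += 1

    # Read codes off the finished tree: 0 for a left branch, 1 for a right branch.
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix or "0"   # handle a one-character alphabet
    walk(heap[0][2], "")
    return codes

print(huffman_codes("this is an example of a huffman tree"))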

Other Blogs that have covered this topic:
Techno Nutty
billatnapier

Learn About Binary Search Trees

[Image: binary search tree script]

I just finished a script that shows the properties of the binary search tree data structure.

These data structures are organized such that the data lies in “nodes”, and each node connects directly to up to two new nodes. These new nodes are called the children of the node, and the original node is called the parent. Because there are up to two children, we designate one child as the “left” child and the other as the “right” child, with the property that the value stored in the left child is less than the value in the parent, which in turn is less than the value of the right child. If a parent has fewer than two children, then one (or both) of its children is given the value null.

The insert and delete procedures need to make sure that they keep the elements of a binary search tree in sorted order.
To insert into a BST, we must first find the correct location where the new element will be placed. This means comparing the value of the new element to the current head of the tree, resulting in three possible outcomes:
– If the head is null, then insert the new node at the current position, because there is no subtree to compare it to.
– If the value of the new element is less than the value at the head node, run the insert procedure on the left child of head.
– If the value of the new element is greater than the value at the head node, run the insert procedure on the right child of head.

Similarly, the remove procedure for a binary search tree must first find the element to be removed. Once that element is found, there are three cases depending on the type of node we are dealing with:
– If the node has no children, then simply remove the node from the tree.
– If the node has only one child (either a left child or a right child), then have the parent of the node point to the child of the node (thus bypassing the node itself).
– If the node has two children, then we have two options: either replace the node with the minimum value of the right subtree or with the maximum value of the left subtree. The nodes holding these minimum and maximum values have at most one child, because, by definition, the minimum value of a right subtree cannot have a left child (such a child would hold a value smaller than the minimum, a contradiction). Because these nodes have at most one child, we can use the two cases above to remove them from the tree.
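
Here is a minimal Python sketch of the insert and remove procedures described above (an illustration of the idea, not the code behind the script):

class Node:
    def __init__(self, value):
        self.value = value
        self.left = None     # left child: values less than this node's value
        self.right = None    # right child: values greater than this node's value

def insert(head, value):
    """Insert value into the subtree rooted at head; return the subtree's root."""
    if head is None:                 # empty subtree: the new node belongs here
        return Node(value)
    if value < head.value:           # smaller values go into the left subtree
        head.left = insert(head.left, value)
    elif value > head.value:         # larger values go into the right subtree
        head.right = insert(head.right, value)
    return head                      # duplicate values are ignored in this sketch

def remove(head, value):
    """Remove value from the subtree rooted at head; return the subtree's root."""
    if head is None:
        return None
    if value < head.value:
        head.left = remove(head.left, value)
    elif value > head.value:
        head.right = remove(head.right, value)
    else:
        # No children or one child: bypass the node entirely.
        if head.left is None:
            return head.right
        if head.right is None:
            return head.left
        # Two children: replace this node's value with the minimum of the right
        # subtree, then remove that minimum (which has at most one child).
        successor = head.right
        while successor.left is not None:
            successor = successor.left
        head.value = successor.value
        head.right = remove(head.right, successor.value)
    return head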

Because a binary search tree is different from a standard array, there are different methods for viewing its contents. Three common such methods are preorder, inorder, and postorder traversal:
– Preorder traversal visits the nodes of a binary search tree in the order (node), (left child), (right child).
– Inorder traversal visits the nodes of a binary search tree in the order (left child), (node), (right child).
– Postorder traversal visits the nodes of a binary search tree in the order (left child), (right child), (node).

We are also interested in the depth of a tree, which is the number of layers or levels in the tree. This can be computed by counting the number of nodes on the longest path from the root of the tree to a leaf node (a node with no children).
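
Continuing the sketch above (it reuses the same Node class and insert function), the three traversals and the depth computation can be written as short recursive functions:

def preorder(node):
    return [] if node is None else [node.value] + preorder(node.left) + preorder(node.right)

def inorder(node):
    return [] if node is None else inorder(node.left) + [node.value] + inorder(node.right)

def postorder(node):
    return [] if node is None else postorder(node.left) + postorder(node.right) + [node.value]

def depth(node):
    """Number of levels on the longest path from this node down to a leaf."""
    return 0 if node is None else 1 + max(depth(node.left), depth(node.right))

root = None
for v in [8, 3, 10, 1, 6, 14]:
    root = insert(root, v)
print(inorder(root))    # [1, 3, 6, 8, 10, 14], the sorted order
print(preorder(root))   # [8, 3, 1, 6, 10, 14]
print(postorder(root))  # [1, 6, 3, 14, 10, 8]
print(depth(root))      # 3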

Other Blogs that have covered this topic:
Stoimen’s Web Log

Examples of the Binary Search Algorithm

I have published code that shows examples of the Binary Search Algorithm.

In order for this algorithm to be applicable, we need to assume that we’re dealing with a sorted list to start. As a result, instead of proceeding iteratively through each item in the list, the binary search algorithm repeatedly divides the list in half and continues searching only in the half that could contain the element.

It can be shown that the maximum number of iterations this algorithm requires is equal to the number of times we can divide the list in half, which is on the order of log2(n), where n is the number of items in the list.
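
A minimal Python sketch of the algorithm (an illustration, not the published code) makes the halving explicit; each pass through the loop discards half of the remaining items:

def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1 if absent."""
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2      # middle of the current search range
        if items[mid] == target:
            return mid
        elif items[mid] < target:    # target can only be in the upper half
            low = mid + 1
        else:                        # target can only be in the lower half
            high = mid - 1
    return -1

print(binary_search([2, 3, 5, 7, 11, 13, 17], 11))   # 4
print(binary_search([2, 3, 5, 7, 11, 13, 17], 6))    # -1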