
Let's Learn About XOR Encryption

One of the more common things about this generation is the constant desire to write up (type) their thoughts. So many of the conversations from my high school days were long-lasting, but quickly forgotten. Today’s generation is much more likely to blog, tweet, write status updates or simply open up a notepad file and write up their thoughts after such a conversation.

When we feel that our thoughts are not ready for public eyes (maybe you want to run your idea by the Patent and Trademark Office before speaking about it) we may seek some form of security to ensure that they stay private. An old-fashioned way of doing this was to write in a diary and keep it under lock and key. The mathematical field of encryption also tries to grant privacy by encoding messages so that only people with the necessary information can read them.

The type of encryption I want to speak about today is called XOR encryption. It is based on the logical operation called “exclusive or” (hence the name XOR). The exclusive or of two logical statements is true if exactly one of the two statements is true, but not both. This can be represented with the following truth table:

Input 1   Input 2   XOR Result
T         T         F
T         F         T
F         T         T
F         F         F
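
If you want to check this table on a computer, Python’s boolean ^ operator is exactly this exclusive or. A quick sketch (not part of the script linked below) that prints the table:

    # Print the XOR truth table using Python's boolean ^ (exclusive or) operator.
    for a in (True, False):
        for b in (True, False):
            print(a, b, a ^ b)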

XOR encryption is particularly useful in this day and age because every character we type is understood by the computer as a sequence of zeros and ones. The current standard is Unicode (most often stored using the UTF-8 encoding), which for the basic Latin letters agrees with ASCII. Under this encoding the letter ‘a’ is represented as the binary string ‘01100001’. Similarly, every letter, number and special character can be represented by its own binary string. These binary strings are just an assignment of numbers to characters so that we can represent them in the computer. The numbers can then be thought of in base 10, which is how we generally think about numbers, or in base 2, which is how computers generally work with numbers (or in a number of other ways). The way we would use these binary strings in encryption is first to translate a text from human-readable characters to machine-readable form via their binary strings. For example, for the word “Invincible” we would get the following binary strings (a short code sketch for producing them appears after the table):

Letter   Unicode in base 10   Unicode in base 2
I        73                   01001001
n        110                  01101110
v        118                  01110110
i        105                  01101001
n        110                  01101110
c        99                   01100011
i        105                  01101001
b        98                   01100010
l        108                  01101100
e        101                  01100101
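
If you want to produce these strings yourself, a short sketch using Python’s built-in ord and format functions (not the script linked elsewhere in this post) looks like this:

    # Convert each character of a word to its code point and an 8-bit binary string.
    word = "Invincible"
    for ch in word:
        code = ord(ch)                        # the character's number in base 10
        print(ch, code, format(code, "08b"))  # the same number as an 8-bit binary string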

To encrypt the message we need a key, and we simply perform the XOR operation between the key and every character in the string. Similarly, to decrypt the message we perform the XOR operation between the key and every character in the encoded message. This means that the key (much like a normal key to a diary) must be kept private, and only the people the message is meant to be shared between should have access to it.
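
To make this concrete, here is a minimal sketch of the idea in Python, assuming a single-character key (the actual script linked below may work differently). Because XOR is its own inverse, the same routine both encrypts and decrypts:

    def xor_cipher(text, key):
        # XOR each character's code point with the key's code point.
        # Applying this twice with the same key returns the original text.
        return "".join(chr(ord(ch) ^ ord(key)) for ch in text)

    secret = xor_cipher("Invincible", "k")   # encrypt
    original = xor_cipher(secret, "k")       # decrypt
    print(original)                          # -> Invincible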

Here is a link to the script where you can check out XOR Encryption. Try it out and let me know what you think.

Visualizing Huffman Coding Trees

[Image: a Huffman coding tree]

Here is a link to a script I finished to help visualize the Huffman Coding Algorithm.

What would you do if you wanted to transfer a message, say one written in English, but you only had a limited set of characters? Suppose these characters are 0 and 1. The only way of doing this is to write some type of procedure to translate from our 26-letter alphabet to the 0-1 binary alphabet. There are several ways of developing these encoding functions, but we will focus on those that attempt to translate each individual character into a sequence of 0s and 1s. One of the more popular such codes today is ASCII, which maps each character to a binary string (of 0s and 1s) of length 8. For example, here is the ASCII code for the lowercase letters.

ASCII English
01100001 a
01100010 b
01100011 c
01100100 d
01100101 e
01100110 f
01100111 g
01101000 h
01101001 i
01101010 j
01101011 k
01101100 l
01101101 m
01101110 n
01101111 o
01110000 p
01110001 q
01110010 r
01110011 s
01110100 t
01110101 u
01110110 v
01110111 w
01111000 x
01111001 y
01111010 z

What you notice from this is that each of these encodings begins with “011”, which amounts to a lot of wasted space. The ASCII code doesn’t care about this because the fixed length of each binary string allows for easy lookup of particular characters (i.e. you can start your decoding almost anywhere in the string as long as you start at a multiple of 8).
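
That fixed width makes decoding simple. A sketch of how one might split a bit string into 8-bit chunks and read each chunk back as a character (the bit string below is just an illustration):

    # Decode a fixed-width binary string by reading it 8 bits at a time.
    bits = "011010000110100100100001"  # "h", "i", "!" in 8-bit ASCII
    decoded = "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))
    print(decoded)  # -> hi!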

But what if we were interested in minimizing the total number of bits used by the encoded string? This is where the Huffman coding algorithm gains its fame. Unlike the ASCII coding scheme, Huffman codes assign shorter codes to the more frequently occurring characters in your string. Huffman was able to prove that this tactic produces an optimal prefix code, that is, no other character-by-character code gives a shorter encoding of the string.

The Huffman Coding procedure operates as follows (a small sketch in code appears after the list):
1. Read the string to be encoded -> Input.
2. For each character in the input string, calculate its frequency (i.e. the number of times it occurs in the input).
3. Create a node for each character, storing the character and its frequency.
4. Place these nodes into a priority queue ordered by frequency, lowest first.
5. Remove the two nodes with the lowest frequencies from the queue.
6. Create a new node which points to the two nodes just removed (node -> left points to one node; node -> right points to the other), with frequency equal to the sum of their frequencies.
7. Insert this new node into the queue.
8. If the queue still contains more than one node, go back to step 5.
9. The single node left in the queue is the root of the Huffman tree; labelling each left branch 0 and each right branch 1, the code for a character is the sequence of labels on the path from the root down to its leaf.
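
Here is a minimal sketch of the procedure in Python, using the standard library’s heapq module as the priority queue. It only returns the character-to-code map rather than drawing the tree, so it is not the visualization script linked above:

    import heapq
    from collections import Counter

    def huffman_codes(text):
        """Build a Huffman code for text; return a {character: bitstring} map."""
        # One entry per distinct character: (frequency, tie-breaker, {char: code so far}).
        heap = [(freq, i, {ch: ""}) for i, (ch, freq) in enumerate(Counter(text).items())]
        heapq.heapify(heap)
        tie = len(heap)
        while len(heap) > 1:
            # Remove the two lowest-frequency nodes and merge them under a new parent,
            # prefixing 0 for the left subtree and 1 for the right subtree.
            f1, _, left = heapq.heappop(heap)
            f2, _, right = heapq.heappop(heap)
            merged = {ch: "0" + code for ch, code in left.items()}
            merged.update({ch: "1" + code for ch, code in right.items()})
            heapq.heappush(heap, (f1 + f2, tie, merged))
            tie += 1
        return heap[0][2]

    codes = huffman_codes("invincible")
    encoded = "".join(codes[ch] for ch in "invincible")
    print(codes)
    print(len(encoded), "bits, versus", 8 * len("invincible"), "bits in 8-bit ASCII")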

Other Blogs that have covered this topic:
Techno Nutty
billatnapier