
Polynomial Arithmetic


With students beginning to attend classes across the nation, I wanted to focus the site on some of the things they’re going to be addressing. This latest page publicizes some scripts that I wrote to help with polynomial arithmetic. Originally I wrote these as homework exercises for a programming class, but I have found them useful ever since, both in teaching mathematics classes like college algebra, which devotes a lot of attention to polynomials, and in my research life. It’s funny (and sad) how many simple errors a person (mathematician or not) can make when performing simple arithmetic, so I found it very useful to have a calculator more advanced than the simple scientific calculators that are so easily available.

I’m not going to spend a lot of time discussing the importance of polynomials or trying to justify the need for them. I will, however, bring up some problems involving polynomials that I’d like to address in the future. The first is finding the roots of the characteristic polynomial of a matrix. This is useful in research because these roots are the eigenvalues of the matrix and can reveal many of its properties. There are also data analysis tools, like Singular Value Decomposition and Principal Component Analysis, that I will probably build out from this initial set of scripts.
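
As a quick illustration of that connection (using NumPy here, not the scripts on this page), the roots of a matrix’s characteristic polynomial do come out as its eigenvalues:

import numpy as np

# Illustration only: NumPy stands in for the scripts described on this page.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Coefficients of the characteristic polynomial det(xI - A) = x^2 - 4x + 3,
# highest-degree term first.
char_poly = np.poly(A)          # [ 1., -4.,  3.]

# The roots of that polynomial...
roots = np.roots(char_poly)     # [3., 1.]

# ...match the eigenvalues computed directly (up to ordering).
print(roots)
print(np.linalg.eigvals(A))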

The user interface for the scripts I’ve written generates two polynomials and asks the user what is to be done with them. The options are to add the two, subtract polynomial 2 from polynomial 1, multiply the two, divide polynomial 1 by polynomial 2, and divide polynomial 2 by polynomial 1. There is also the option to make the calculations more of a tutorial by showing the steps along the way. Users who want new problems can generate a new first or second polynomial and clear the work.

For addition and subtraction, the program works by first ensuring that both polynomials have the same degree. This is achieved by adding terms with zero coefficients to the lower-degree polynomial. Once this has been done, we simply add (or subtract) the terms that have the same exponent.
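
To make that concrete, here is a minimal Python sketch of the idea (my own illustration, not the actual scripts behind the page), assuming a polynomial is stored as a list of coefficients where index i holds the coefficient of x^i:

# Coefficient-list representation assumed for illustration: index i holds the x^i coefficient.
def pad(coeffs, length):
    """Append zero coefficients until the list reaches the given length."""
    return coeffs + [0] * (length - len(coeffs))

def add_polynomials(p, q):
    """Pad both polynomials to the same degree, then add terms with the same exponent."""
    n = max(len(p), len(q))
    return [a + b for a, b in zip(pad(p, n), pad(q, n))]

def subtract_polynomials(p, q):
    """Same idea, subtracting polynomial q from polynomial p."""
    n = max(len(p), len(q))
    return [a - b for a, b in zip(pad(p, n), pad(q, n))]

# (3x^2 + 2x + 1) + (x + 5) = 3x^2 + 3x + 6
print(add_polynomials([1, 2, 3], [5, 1]))   # [6, 3, 3]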

For multiplication, the program first builds a matrix A whose entry in row i and column i + j is the product of the ith term of the first polynomial and the jth term of the second polynomial. Any cell that never receives a product is filled with a zero. Once this matrix is formed, we sum the columns of the matrix to arrive at the final answer, since each column collects exactly the products that share the same exponent.
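
Here is a rough Python sketch of that matrix construction, under the same coefficient-list assumption as above and again standing in for, rather than reproducing, the code behind the page:

# Coefficient-list representation assumed for illustration: index i holds the x^i coefficient.
def multiply_polynomials(p, q):
    """Multiply two polynomials by building the matrix A and summing its columns."""
    rows, cols = len(p), len(p) + len(q) - 1
    A = [[0] * cols for _ in range(rows)]   # cells that never get a product stay zero
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            # The ith term of p times the jth term of q lands in row i, column i + j.
            A[i][i + j] = a * b
    # Each column collects the products that share the same exponent.
    return [sum(A[i][k] for i in range(rows)) for k in range(cols)]

# (x + 2)(x + 3) = x^2 + 5x + 6
print(multiply_polynomials([2, 1], [3, 1]))   # [6, 5, 1]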

The division of two polynomials works by first dividing the leading term of the numerator by the leading term of the denominator. This answer is then multiplied by the denominator and subtracted from the numerator. The leading term of the numerator now cancels, and we use the result as the numerator going forward. This process is repeated as long as the numerator’s degree is still greater than or equal to the denominator’s degree.
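
The loop below is a small Python sketch of that long-division process, once more assuming the coefficient-list representation and meant as an illustration rather than the script itself:

# Coefficient-list representation assumed for illustration: index i holds the x^i coefficient.
def degree(p):
    """Index of the highest nonzero coefficient (-1 for the zero polynomial)."""
    for i in range(len(p) - 1, -1, -1):
        if p[i] != 0:
            return i
    return -1

def divide_polynomials(num, den):
    """Divide num by den, returning (quotient, remainder)."""
    num = list(num)                       # work on a copy; it ends up as the remainder
    quotient = [0.0] * len(num)
    while 0 <= degree(den) <= degree(num):
        lead = degree(num)
        shift = lead - degree(den)
        # Divide the leading term of the numerator by the leading term of the denominator.
        factor = num[lead] / den[degree(den)]
        quotient[shift] = factor
        # Multiply that single-term answer by the denominator and subtract it from the numerator.
        for j in range(degree(den) + 1):
            num[j + shift] -= factor * den[j]
        num[lead] = 0                     # the leading term cancels exactly
    return quotient, num

# (x^2 + 5x + 6) / (x + 2) = x + 3 with remainder 0
q, r = divide_polynomials([6, 5, 1], [2, 1])
print(q, r)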

Check out the latest page on polynomial arithmetic and let me know what you think.

My Review of “The Golden Ticket: P, NP, and the Search for the Impossible”


I came home from work on Wednesday a bit too tired to go for a run and a bit too energetic to sit and watch TV. So I decided to pace around my place while reading a good book. The question was whether I had a good book to read. I had been reading sci-fi books earlier this month and wanted a break from that, so I looked in my mailbox and noticed that I had just received my copy of “The Golden Ticket: P, NP, and the Search for the Impossible”. At the time, I was of the mindset that I had just gotten off work and really didn’t want to be reading a textbook as if I were still at work. But I decided to give it a try and at least make it through the first few pages; if it got to be overwhelming, I’d just put it down and do something else.

About three hours later I was finishing the final pages of the book, impressed that the author (Lance Fortnow) was able to treat complexity theory the same way I have seen physics professors treat quantum physics and the expanding universe on shows like “The Universe” and “Through the Wormhole with Morgan Freeman”, where complex topics are discussed in everyday terminology. It wouldn’t surprise me to see Dr. Fortnow on shows like “The Colbert Report” or “The Daily Show” introducing the topics in this book to a wider audience.

Below is the review I left on Amazon.

I really enjoyed this book. It was a light enough read to finish in one sitting on a weeknight within a few hours, but it also showed its importance by connecting the P = NP problem to issues in health care, economics, security, scheduling and a number of other areas. And instead of talking in a "professor-like" tone, the author creates illustrative examples in Chapters 2 and 3 that are easy to grasp. These examples form the basis for many of the problems addressed in the book.

This is a book that needed to be written and needs to be on everyone's bookshelf, particularly for those asking questions like "what is mathematics" or "what is mathematics used for". This book answers those questions, and towards the end gives examples (in plain English) of the different branches of mathematics and theoretical computer science, without making it read like a textbook.

Also, here is a link to the blog that Lance Fortnow and William Gasarch run, called “Computational Complexity”, and here is a link to the website of the book, “The Golden Ticket: P, NP, and the Search for the Impossible”.

What is a “Hard” Problem?

Throughout our lives, we are introduced to a wide variety of problems, and naturally we tend to think of some problems as more difficult than others. In the exercises at the end of a chapter in a book, for example, the author generally begins with those that can be completed in a few minutes or directly from the material in the chapter. The exercises at the end tend to be more time consuming and may require outside resources. In this sense, authors compare the difficulty of problems by the amount of time they expect the average reader to take to solve them.

This is a popular way to look at difficulty: not just how long it would take a single person (say, you or me) to solve a problem, but how long it would take our peers as well. But how can we measure this? Should I assume that because a problem takes me two years to solve, it would take the average person the same amount of time? Or, if a problem has never been solved, is that because of the difficulty of the problem, or could it be for another reason, such as no one having thought of the problem before?

In particular, these questions were being asked in the world of computer science in the 1960s and 1970s. It was around this time that Stephen Cook came up with the concept of comparing how difficult one problem is relative to another. A problem A can be “reduced” to another problem B if a way to solve problem B quickly also gives a way to solve problem A quickly.

This concept of reducibility provided a foundation for this notion of difficulty, as it was no longer enough to say “I think this problem is hard” or “the average person would take just as long as me to solve this problem”. Instead, Cook considered the satisfiability problem (the question of whether a given boolean formula has an assignment to its variables that satisfies all of its clauses) and showed that every quickly verifiable decision problem, that is, every problem in NP, where instances with a “yes” answer have short proofs that the answer is indeed “yes”, can be reduced to it.
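
As an aside, the small brute-force sketch below (my own Python illustration, not something from Cook’s paper) makes “quickly verifiable” concrete: checking a proposed assignment against every clause is fast, even though the obvious search over all assignments can take exponential time.

from itertools import product

# Illustration only. A formula in conjunctive normal form is a list of clauses;
# a positive integer k stands for variable k and -k for its negation.
formula = [[1, 2], [-1, 3], [-2, -3]]   # (x1 or x2) and (not x1 or x3) and (not x2 or not x3)

def verify(formula, assignment):
    """The quick part: check that an assignment (dict var -> bool) satisfies every clause."""
    return all(any(assignment[abs(lit)] == (lit > 0) for lit in clause) for clause in formula)

def brute_force_sat(formula, num_vars):
    """The slow part: try up to 2^num_vars assignments, verifying each one."""
    for bits in product([False, True], repeat=num_vars):
        assignment = {i + 1: bits[i] for i in range(num_vars)}
        if verify(formula, assignment):
            return assignment
    return None

print(brute_force_sat(formula, 3))   # prints one satisfying assignment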

This meant that the satisfiability problem is at least as hard as every one of these quickly verifiable decision problems: if it could be solved quickly, then every problem in this class, NP, could also be solved quickly. Problems that are at least as hard as every problem in NP are called NP-Hard, and decision problems that are both NP-Hard and in NP are called NP-Complete. Richard Karp later published a paper identifying 21 NP-Complete problems, one of which was the Knapsack Problem I spoke about last time.

[Image: the famous NP-Completeness cartoon from Garey and Johnson]

This concept of NP-Completeness is captured in the image that became famous from Garey and Johnson’s book “Computers and Intractability: A Guide to the Theory of NP-Completeness”. It shows that the notion of difficulty still builds on the ability of our peers to solve a given problem. But by introducing a class of “hard” problems, and a litmus test for inclusion in that class, researchers could now consider new problems and determine their difficulty by comparing them to known hard problems.

(A quick note: I’ve used the term “quickly” here, but what I formally mean is that the problem is solvable by an algorithm whose worst-case running time is bounded by a polynomial in the input parameters of the problem.)

Other Blogs that have covered this topic:
Explorations
To Imaging, and Beyond!