
My Review of “The Golden Ticket: P, NP, and the Search for the Impossible”

The Golden Ticket Image

I came home from work on Wednesday a bit too tired to go for a run and a bit too energetic to sit and watch TV. So I decided to pace around my place while reading a good book. The question was whether I had a good book to read. I had been reading sci-fi type books earlier this month and wanted a break from that, so I looked in my mailbox and noticed that I had just received my copy of “The Golden Ticket: P, NP, and the Search for the Impossible”. At the time, I was of the mindset that I had just gotten off of work and really didn’t want to be reading a textbook as if I were still at work. But I decided to give it a try and at least make it through the first few pages; if it got to be overwhelming, I’d just put it down and do something else.

About three hours later I was finishing the final pages of the book, impressed that the author (Lance Fortnow) was able to treat complexity theory the same way I have seen physics professors speak about quantum physics and the expanding universe on shows like “The Universe” and “Through the Wormhole with Morgan Freeman”, where complex topics are discussed in everyday terminology. It wouldn’t surprise me to see Dr. Fortnow on shows like “The Colbert Report” or “The Daily Show” introducing the topics in this book to a wider audience.

Below is the review I left on Amazon.

I really enjoyed this book. It was a light enough read to finish in one sitting on a weeknight within a few hours, but it also showed its importance by connecting the dots between the P = NP problem and issues in health care, economics, security, scheduling and a number of other areas. And instead of talking in a "professor-like" tone, the author creates illustrative examples in Chapters 2 and 3 that are easy to grasp. These examples form the basis for many of the problems addressed in the book.

This is a book that needed to be written and needs to be on everyone's bookshelf, particularly for those asking questions like "what is mathematics?" or "what is mathematics used for?". This book answers those questions, and towards the end gives examples (in plain English) of the different branches of mathematics and theoretical computer science, without reading like a textbook.

Also, here is a link to the blog that Lance Fortnow and William Gasarch run, called “Computational Complexity”, and here is a link to the website of the book, “The Golden Ticket: P, NP, and the Search for the Impossible”.

What is a “Hard” Problem?

Throughout our lives, we are introduced to a wide variety of problems. Naturally, we tend to think of some problems as more difficult than others. In the exercises at the end of a chapter in a book, for example, the author generally begins with those that can be completed in a few minutes or directly from the material in the chapter. The exercises toward the end tend to be more time consuming and may require outside resources. In this sense, these authors compare the difficulty of problems by the amount of time they expect the average reader to take to solve them.

This is a popular way to look at difficulty – in terms of not only how long it would take a single person (say, you or me) to solve a problem, but how long it would take our peers as well. But how can we measure this? Should I assume that because a problem takes me two years to solve, it would take the average person the same amount of time? And if a problem has never been solved, is that because of its difficulty, or could it be for another reason, such as no one having thought of the problem before?

In particular, these questions were being asked in the world of computer science in the 1960s and 1970s. It was around this time that Stephen Cook came up with a way of comparing how difficult one problem is relative to another. A problem A can be “reduced” to another problem B if a quick way to solve problem B also gives a quick way to solve problem A.
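To make the idea of a reduction concrete, here is a small sketch in Python of a classic textbook reduction (my own illustration, not an example from the book): the Independent Set problem reduces to the Clique problem, because a set of vertices is independent in a graph exactly when it forms a clique in the complement of that graph. So a quick Clique solver would immediately give a quick Independent Set solver.

```python
def complement(graph):
    """Build the complement graph: the core of the reduction from
    Independent Set to Clique. Graphs are dicts mapping each vertex
    to a set of neighbors (no self-loops)."""
    nodes = set(graph)
    return {v: nodes - {v} - set(graph[v]) for v in graph}

def is_independent_set(graph, subset):
    """No two chosen vertices share an edge."""
    return all(v not in graph[u] for u in subset for v in subset)

def is_clique(graph, subset):
    """Every two distinct chosen vertices share an edge."""
    return all(u == v or v in graph[u] for u in subset for v in subset)

# Hypothetical 3-vertex graph with a single edge between 1 and 2.
g = {1: {2}, 2: {1}, 3: set()}
gc = complement(g)
# {1, 3} is independent in g, and a clique in the complement.
print(is_independent_set(g, {1, 3}), is_clique(gc, {1, 3}))  # True True
```

Notice that building the complement takes only one pass over the vertices, so if Clique could be solved quickly, Independent Set could too.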

This concept of reducibility provided a foundation for this concept of difficulty, as it was no longer enough to say “I think this problem is hard” or “the average person would take just as long as me to solve this problem”. Instead, Cook considered the satisfiability problem (the question of whether a given Boolean formula has an assignment to its variables that satisfies all of its clauses) and showed that every quickly verifiable decision problem could be reduced to it. These quickly verifiable decision problems form the class NP: the problems where instances with a “yes” answer have short proofs that the answer is indeed “yes”.
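As a sketch of what “quickly verifiable” means for satisfiability (my own illustration, with a made-up formula, not code from the book): checking a proposed assignment against every clause takes time linear in the size of the formula, while the only obvious way to *find* an assignment is to try all 2^n of them.

```python
from itertools import product

# A CNF formula as a list of clauses; each clause is a list of nonzero
# integers, where k means variable k and -k means its negation.
# Made-up example: (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
formula = [[1, -2], [2, 3], [-1, -3]]
num_vars = 3

def satisfies(assignment, clauses):
    """The 'quick verification' step: check the candidate assignment
    against every clause, in time linear in the formula's size."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

def brute_force_sat(clauses, n):
    """The search step: try all 2^n assignments. Each one is easy to
    verify, but the search itself is exponential in n."""
    for bits in product([False, True], repeat=n):
        assignment = {i + 1: bits[i] for i in range(n)}
        if satisfies(assignment, clauses):
            return assignment
    return None

print(brute_force_sat(formula, num_vars))  # {1: False, 2: False, 3: True}
```

The gap between those two functions is the heart of P versus NP: verification is cheap, but no one knows whether the search can always be made cheap as well.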

This meant that the satisfiability problem is at least as hard as every one of these quickly verifiable decision problems: if it could be solved quickly, then every one of them could also be solved quickly. Problems that are at least as hard as every problem in NP in this sense are called NP-Hard. Decision problems that are NP-Hard and also belong to NP themselves are called NP-Complete. Richard Karp later published a paper that proved 21 problems NP-Complete, one of which was the Knapsack Problem I spoke about last time.
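For example, the decision version of the Knapsack Problem is in NP because a proposed subset of items serves as a short proof of a “yes” answer. A minimal sketch of the verification step, with a made-up instance (my own illustration):

```python
def verify_knapsack(items, capacity, target, chosen):
    """Verify a certificate for the knapsack decision problem: do the
    chosen items fit within the capacity and reach the target value?
    The check runs in time linear in the number of items, which is
    what places this decision problem in NP."""
    total_weight = sum(items[i][0] for i in chosen)
    total_value = sum(items[i][1] for i in chosen)
    return total_weight <= capacity and total_value >= target

# Hypothetical instance: (weight, value) pairs, capacity 7, target value 11.
items = [(2, 3), (3, 4), (4, 5), (5, 8)]
print(verify_knapsack(items, 7, 11, [0, 3]))  # True: weight 7, value 11
print(verify_knapsack(items, 7, 11, [1, 2]))  # False: weight 7, value only 9
```

Finding such a subset, on the other hand, has no known quick algorithm; checking one is easy.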

Garey Johnson NP-Complete Image

This concept of NP-Completeness underlies the image made famous by Garey and Johnson's book “Computers and Intractability: A Guide to the Theory of NP-Completeness”. It shows that the concept of difficulty still builds on the ability of our peers to solve a given problem. But by introducing a class of “hard” problems, and a litmus test for inclusion in that class, researchers could now take a new problem and determine its difficulty by comparing it to known hard problems.

(A quick note: I’ve used the term “quickly” here, but what I formally mean is that the problem is solvable by an algorithm whose worst-case running time is bounded by a polynomial in the size of the input.)

Other Blogs that have covered this topic:
Explorations
To Imaging, and Beyond!