Knapsack Problems

I have added a script to help users understand the knapsack problem as well as some attempts at solving it.

To help understand this problem, I want you to think about a common situation in many people's lives. You have a trip today, you've overslept, and you are at risk of missing your flight. To top matters off, you were planning to pack this morning but now do not have the time. You quickly get up and begin to get ready. You grab the first bag you see and try to make quick decisions about which items to take. In your head you weigh the things you'll need for the trip against the things you can purchase when you get there, and the things you need to have a good time against the things you can do without. Worse still, you don't have time to look for your ideal luggage, so you have the additional constraint that the items you pick must all fit into this first bag you found this morning.

The situation I described above is a common problem. Even if we ignore the part about the flight and just concentrate on the problem of putting the most valuable set of items in our bag, where each item has its own value and its own size, this is a problem that comes up quite often. It is known (in the mathematics, computer science and operations research communities) as the knapsack problem. It is difficult to solve: it is NP-Hard, meaning it is at least as hard as the most difficult problems in the class NP, for which no efficient (polynomial-time) algorithms are known. Because of this difficulty, the problem has been well studied.

What I provide in my script are two approaches to solving this problem. The first is a greedy approach, which builds a solution by repeatedly choosing, among the items that still fit, the one with the highest value-to-size ratio. This approach runs very fast, but can be shown to give sub-optimal solutions.
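As an illustration, here is a minimal sketch of the greedy idea in Python. The names and item representation are my own, not necessarily those used in the script:

```python
def greedy_knapsack(items, capacity):
    """Greedy heuristic: repeatedly take the item with the best
    value-to-size ratio that still fits. Fast, but not optimal.

    items: list of (value, size) pairs with positive sizes.
    Returns (total_value, indices of the chosen items).
    """
    order = sorted(range(len(items)),
                   key=lambda i: items[i][0] / items[i][1],
                   reverse=True)
    total, chosen = 0, []
    for i in order:
        value, size = items[i]
        if size <= capacity:               # item still fits
            chosen.append(i)
            total += value
            capacity -= size
    return total, chosen

# Sub-optimal example: greedy takes the ratio-best item (7, 6), after
# which nothing else fits (value 7), while the two (5, 5) items
# together are worth 10.
print(greedy_knapsack([(7, 6), (5, 5), (5, 5)], 10))  # (7, [0])
```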

The second approach is a dynamic programming approach. This algorithm solves the problem by ordering the items 1, …, n and observing that an optimal solution on the first i items is built from optimal solutions on the first i-1 items: item i is either left out, or taken together with an optimal solution on the first i-1 items with the remaining capacity reduced by item i's size. This algorithm solves the problem optimally, but it requires the computation of many sub-problems, which causes it to run slowly.
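A minimal sketch of the table-filling idea (again my own illustration rather than the script's exact code; it assumes integer item sizes):

```python
def dp_knapsack(items, capacity):
    """0/1 knapsack by dynamic programming.

    best[c] holds the best value achievable with capacity c using the
    items considered so far. Runs in O(n * capacity) time: exact, but
    slow when the capacity is large.
    """
    best = [0] * (capacity + 1)
    for value, size in items:
        # Walk capacities high to low so each item is used at most once.
        for c in range(capacity, size - 1, -1):
            best[c] = max(best[c], best[c - size] + value)
    return best[capacity]

print(dp_knapsack([(7, 6), (5, 5), (5, 5)], 10))  # prints 10, the optimum
```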

Update (4/2/2013): I enjoy this problem so much that I decided to implement two additional approaches to the problem: Linear Programming and Backtracking.

The Linear Programming approach to this problem comes from the understanding that the knapsack problem (as well as any other NP-Complete problem) can be formulated as an Integer Program: a mathematical formulation where we seek to maximize a linear objective function subject to a set of linear inequality constraints, with the condition that the variables take on integer values. In the instance of the knapsack problem, we introduce a variable xi for each item i; since we either select an item or we do not, we let xi = 1 if the item is selected and xi = 0 otherwise. The objective is to maximize the total value of the items selected, which can be expressed as a linear objective function by taking the sum of the products of the value vi of each item and the variable xi. The only constraint says that all selected items fit into the knapsack: the sum of the products of the weight wi of each item and the variable xi can be at most the total size of the knapsack.
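In symbols (writing W for the total size of the knapsack, a name of my choosing since the post does not give one), the formulation reads:

    maximize    v1 x1 + v2 x2 + … + vn xn
    subject to  w1 x1 + w2 x2 + … + wn xn <= W
                xi in {0, 1}   for i = 1, …, n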

The LP relaxation of an integer program can be found by dropping the requirement that the variables be integer and replacing it with linear inequalities. So in the case of the knapsack problem, instead of allowing the variables to take on only the values 0 and 1, we would allow the variables to take on any value in the range from 0 to 1, i.e. 0 <= xi <= 1 for each item i. We can then solve this LP to optimality. The optimal LP value is an upper bound on the optimal knapsack value, and since an optimal solution to this particular LP has at most one fractional variable, rounding that variable down to 0 yields a feasible solution to the knapsack problem.

The second knapsack approach I implemented today is backtracking. Similar to the dynamic programming approach to this problem, the backtracking approach will find an optimal solution, but these solutions generally take a long time to compute and are considered computationally inefficient. The algorithm I implemented first orders the items by their index, then considers, for each item i, the sub-problem "What is the best solution I can obtain with this initial solution?". To answer this question, the algorithm begins with an initial solution (initially, the empty set) and a set of unchecked items (initially, all items) and recursively calls itself on sub-problems with an additional item as part of the initial solution and with this item removed from the unchecked items.

So check out my knapsack problem page. I think it's a good way to be introduced to the problem itself, as well as to some of the techniques used in the fields of mathematics, computer science, engineering and operations research.
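For concreteness, here is a minimal sketch of the LP relaxation idea in Python. Rather than calling an LP solver, it uses the classical fact that this particular LP is solved greedily by value-to-size ratio with at most one fractional item; the function and variable names are my own, not the script's:

```python
def knapsack_lp_relaxation(items, capacity):
    """Solve the LP relaxation of the 0/1 knapsack greedily.

    items: list of (value, size) pairs with positive sizes.
    Returns (lp_bound, feasible_value): the LP optimum, which is an
    upper bound on the true optimum, and the value of the 0/1 solution
    obtained by rounding the single fractional variable down to 0.
    """
    order = sorted(range(len(items)),
                   key=lambda i: items[i][0] / items[i][1],
                   reverse=True)
    lp_bound = feasible = 0.0
    remaining = capacity
    for i in order:
        value, size = items[i]
        if size <= remaining:              # take the whole item
            lp_bound += value
            feasible += value
            remaining -= size
        else:                              # take a fraction, then stop
            lp_bound += value * remaining / size
            break
    return lp_bound, feasible

print(knapsack_lp_relaxation([(7, 6), (5, 5), (5, 5)], 10))  # (11.0, 7.0)
```

And a sketch of the backtracking scheme as I read the description above, growing an initial solution one unchecked item at a time and remembering the best value seen:

```python
def backtrack_knapsack(items, capacity):
    """Exhaustive backtracking: optimal, but exponential in the worst case."""
    best = [0]

    def extend(start, room, value):
        # 'value' is the value of the current initial (partial) solution.
        best[0] = max(best[0], value)
        for i in range(start, len(items)):  # branch on each unchecked item
            v, s = items[i]
            if s <= room:                   # prune branches that overflow
                extend(i + 1, room - s, value + v)

    extend(0, capacity, 0)
    return best[0]

print(backtrack_knapsack([(7, 6), (5, 5), (5, 5)], 10))  # prints 10
```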

Other Blogs covering this topic:
Journey to the Four Point Oh

2 thoughts on “Knapsack Problems”

  1. I think I came up with basically the same algorithm as hasan, but it doesn't require preprocessing for range max queries, and maybe it will be simpler to understand (or not).

     The idea is to maintain subarrays which are multipermutations, not necessarily maximal – which I'll call "green" subarrays – and subarrays surrounding each green one which must be contained in any larger multipermutation – I'll call these "yellow" subarrays. Keep growing these (occasionally turning yellow subarrays green when there is an actual multipermutation) until the whole array is covered either yellow or green, and then the largest green one is the answer.

     Call the array A with entries A[1]..A[n]. I'll first present the case where there is only one "1" in the array. Note that any multipermutation must contain this "1", and also that it is itself a length-1 multipermutation. Say this "1" occurs at index k in the array. Then, for i=2..n, store the following:

     - L[i], the largest integer j < k such that A[j] = i
     - R[i], the smallest integer j > k such that A[j] = i
     - LM[i] := max(A[L[i]..k])
     - RM[i] := max(A[k..R[i]])

     Clearly, these four arrays can be computed in linear time in a single pass left and right from index k.

     We will maintain a single "green" subarray, which is itself a subarray of a single "yellow" subarray. Initially, these are both just the single "1" at A[k]. Also maintain:

     - The largest integer in the yellow subarray, initially 1; call it a.
     - The smallest integer not present in the yellow subarray, initially 2; call it b.

     Now repeat the following steps:

     - (Grow yellow subarray) Compare LM[b] and RM[b]. If LM[b] is smaller, then grow the yellow subarray to the left to index L[b] – which must not already be in the yellow subarray, by the definition of b. Update a and b as each new element is added (updating b in constant time can be done with some more bookkeeping). If RM[b] is smaller, do the opposite. If LM[b] = RM[b], then grow in both directions.
     - (Check for multipermutation) If b > a, change the whole yellow subarray to green and expand in both directions to include all adjacent integers <= a.

     This process terminates after at most n steps (any number larger than n can't be part of a multipermutation), and the amortized cost over all steps is O(n), since the yellow array only grows. At the end, the green array is the largest multipermutation.

     For the more general case (more than one "1" in the array), we start initially with each "1" being its own green (and yellow) array, and grow them all simultaneously, merging the yellow and then green arrays when they meet. Constructing the arrays becomes a bit trickier to do in linear time, but is possible by using the fact that any multipermutation on a sub-interval of length m can only contain integers <= m.
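To make the commenter's target concrete, here is a small brute-force reference, assuming a "multipermutation" means a contiguous subarray whose distinct values are exactly {1, …, m} for some m, with repeats allowed (which matches the properties used above). On small inputs, the linear-time algorithm should agree with it:

```python
def largest_multipermutation(A):
    """Brute-force O(n^2) reference: longest contiguous subarray whose
    distinct values are exactly {1, ..., m} for some m (repeats allowed).
    Assumes positive integers, per the problem the comment describes."""
    best_lo, best_hi = 0, -1                 # empty subarray by default
    for lo in range(len(A)):
        seen, m = set(), 0
        for hi in range(lo, len(A)):
            seen.add(A[hi])
            m = max(m, A[hi])                # running maximum of the window
            # Distinct values form {1..m} exactly when there are m of them.
            if len(seen) == m and hi - lo > best_hi - best_lo:
                best_lo, best_hi = lo, hi
    return A[best_lo:best_hi + 1]

print(largest_multipermutation([2, 1, 1, 3, 5]))  # -> [2, 1, 1, 3]
```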
