The cutting optimization problem belongs to the class of Nondeterministic Polynomial-time Complete (NP-Complete) problems [1]. Other problems in this class include Hamiltonian Path, Travelling Salesman, Subset Sum, Clique, Independent Set, Graph Colouring, etc. All of these problems have been analyzed in depth by a huge number of researchers, but no polynomial-time algorithm has been discovered for any of them. This has a direct consequence for the running time and the quality of the optimization.
A polynomial-time algorithm is one whose running time is bounded by a polynomial function of its input size. For instance, if we have n = 1000 pieces to cut and the cutting algorithm has complexity O(n²), then the running time is proportional to 1000², which is 10⁶ units of time. Assuming that our computers can perform 10⁹ operations per second, the cutting optimization algorithm would finish in a fraction of a second. Sadly, this is not the case for the cutting optimization problem: no such fast algorithm for solving it is known.
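As a quick sanity check, this back-of-the-envelope estimate can be reproduced in a few lines of Python (the 10⁹ operations-per-second figure is simply the assumption from the text, not a measured machine speed):

```python
# Runtime estimate for a hypothetical O(n^2) cutting algorithm.
n = 1000                 # number of pieces to cut
operations = n ** 2      # an O(n^2) algorithm performs ~10^6 basic operations
ops_per_second = 1e9     # assumed machine speed from the text

print(operations / ops_per_second, "seconds")  # prints: 0.001 seconds
```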
The only exact algorithm known for solving the cutting optimization problem is an exponential one. An exponential algorithm runs in an exponential amount of time (2ⁿ, 3ⁿ, n!, where n is the number of pieces to be optimized). Thus, even for small instances (let's say 1000 pieces), an exponential algorithm behaves as follows (the sketch after this list reproduces these figures):
- if the complexity is 2ⁿ, then the total number of operations is 2¹⁰⁰⁰, which is roughly 10³⁰¹. Knowing that our computers can perform 10⁹ operations per second, we need about 10²⁹² seconds to run the algorithm. One year has about 3×10⁷ seconds. This means that our algorithm would run for roughly 10²⁸⁴ years. This is a huge value compared to the age of the universe, which is only about 1.4×10¹⁰ years.
- if the complexity is 3ⁿ, then the total number of operations is 3¹⁰⁰⁰, which is roughly 10⁴⁷⁷.
- if the complexity is n!, then the total number of operations is 1000!, which is roughly 10²⁵⁶⁷.
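The same arithmetic can be repeated for each exponential case. A minimal Python sketch, working in base-10 logarithms so the huge numbers never overflow (the 10⁹ operations per second and roughly 3×10⁷ seconds per year are the assumptions used above):

```python
import math

n = 1000
ops_per_second = 1e9        # assumed machine speed
seconds_per_year = 3.15e7   # roughly one year in seconds

for label, log10_ops in [
    ("2^n", n * math.log10(2)),                  # ~10^301 operations
    ("3^n", n * math.log10(3)),                  # ~10^477 operations
    ("n!",  math.lgamma(n + 1) / math.log(10)),  # ~10^2567 operations
]:
    log10_years = log10_ops - math.log10(ops_per_second * seconds_per_year)
    print(f"{label}: about 10^{int(log10_years)} years")
```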
These algorithms would run for an unimaginable number of years. Even if we put all the computers in the world to work on the problem in parallel, we still would not get a meaningful improvement in speed.
This is why another possibility (the one employed by our software too) is to use heuristics or approximation algorithms. A heuristic is an algorithm which is fast and returns a good solution (often the best one) to the problem. However, there is no guarantee that the obtained solution is the optimal one.
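To make the idea concrete, here is a classic textbook heuristic for the one-dimensional case, first-fit decreasing: sort the pieces by length and place each one into the first stock bar that still has room. It runs in polynomial time and usually wastes little material, but nothing guarantees the number of bars is minimal. (This is only an illustration of what a heuristic looks like; it is not the algorithm used by our software.)

```python
def first_fit_decreasing(pieces, stock_length):
    """Assign piece lengths to stock bars with the first-fit decreasing heuristic."""
    free = []    # remaining free length of each opened bar
    layout = []  # pieces placed in each bar
    for piece in sorted(pieces, reverse=True):
        for i, room in enumerate(free):
            if piece <= room:          # first bar with enough room wins
                free[i] -= piece
                layout[i].append(piece)
                break
        else:                          # no existing bar fits: open a new one
            free.append(stock_length - piece)
            layout.append([piece])
    return layout

# 8 pieces cut from 2000 mm bars; the heuristic finds a 3-bar layout quickly,
# but only an exhaustive search could prove that no better layout exists.
print(first_fit_decreasing([800, 700, 700, 600, 500, 400, 300, 200], 2000))
```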
An important parameter of the software is the OptimizationLevel. It basically tells how many configurations are explored before the best solution found is output. If you set the OptimizationLevel to a very low value, you will obtain a solution very fast, but its quality might not be so good. If you set the OptimizationLevel to a very high value, you will obtain a good solution, but not so fast. Thus, one must accept a trade-off between the quality of the solutions and the running time.
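Conceptually, OptimizationLevel behaves like the iteration budget in the sketch below (a hypothetical illustration only: the function names and the random-restart strategy are invented for this example and do not reflect the software's internals). A larger budget means more candidate configurations are examined, so the best result kept can only improve, at the cost of running time:

```python
import random

def pack(order, stock_length):
    """First-fit pack pieces in the given order; return the number of bars used."""
    free = []
    for piece in order:
        for i, room in enumerate(free):
            if piece <= room:
                free[i] -= piece
                break
        else:
            free.append(stock_length - piece)
    return len(free)

def optimize(pieces, stock_length, optimization_level):
    """Try `optimization_level` random piece orders and keep the best packing."""
    best = float("inf")
    for _ in range(optimization_level):   # higher level = more configurations tried
        order = random.sample(pieces, len(pieces))  # a random candidate ordering
        best = min(best, pack(order, stock_length))
    return best

pieces = [800, 700, 700, 600, 500, 400, 300, 200]
print(optimize(pieces, 2000, optimization_level=10))    # fast, possibly mediocre
print(optimize(pieces, 2000, optimization_level=1000))  # slower, usually better
```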
References
[1]. Garey, M.R., Johnson, D.S., Computers and Intractability: A Guide to the Theory of NP-Completeness, W.H. Freeman & Co., San Francisco, USA, 1979.