If N is small, an exhaustive search (brute-force approach) can do the job, since the number of subsets of the input set can be handled by a modern computer in a reasonable time, e.g.:

- N=5: 2^5-1 = 31

- N=10: 2^10-1 = 1,023

- N=20: 2^20-1 = 1,048,575

- N=30: 2^30-1 = 1,073,741,823
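The brute-force enumeration described above can be sketched as follows (a minimal illustration, not an efficient implementation):

```python
from itertools import combinations

def subset_sum_bruteforce(values, target):
    """Exhaustively check every non-empty subset (2^N - 1 of them).

    Returns the first subset found that sums to `target`, or None.
    Only feasible for small N, since the number of subsets doubles
    with every element added to the input set.
    """
    n = len(values)
    for r in range(1, n + 1):
        for subset in combinations(values, r):
            if sum(subset) == target:
                return subset
    return None

print(subset_sum_bruteforce([3, 34, 4, 12, 5, 2], 9))  # (4, 5)
```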

However, when N becomes large the number of subsets grows exponentially, so they cannot be processed in a reasonable amount of time; in other words, an exhaustive search could take thousands of years:

- N=100: 2^100 ≈ 1.26765E+30

- N=1000: 2^1000 ≈ 1.0715E+301

Let's say there is a mathematical way to solve a subset sum problem instance in a reasonable amount of time (in polynomial time) for large inputs (large meaning N = 100, 200, 400, 800, 1600, 3200, 6400, 12800, 25600 input variables).

**What execution times would we expect to measure in order to demonstrate that it runs in polynomial time on those input sizes?**

For instance, suppose that each time you double the input size, the run takes 3 times as long as the previous one:

T(2N) = 3·T(N) ⇒ T(N) = O(N^(log2 3)) ≈ O(N^1.585)

.. or, better than this, each time you double the input size it takes only 2 times as long as the previous run:

T(2N) = 2·T(N) ⇒ T(N) = O(N^(log2 2)) = O(N)
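One could check such a claim empirically by timing runs at successive doublings of N and taking log2 of the ratio between consecutive timings. A sketch of that measurement (the `quadratic` stand-in workload is my own illustration, not an actual subset sum solver):

```python
import math
import time

def doubling_exponents(run, sizes):
    """Estimate the degree k in T(N) ~ c * N^k.

    Times `run` at each size in `sizes` (each double the previous),
    then returns log2(T(2N) / T(N)) for every consecutive pair:
    for a polynomial algorithm this ratio converges to k.
    """
    times = []
    for n in sizes:
        t0 = time.perf_counter()
        run(n)
        times.append(time.perf_counter() - t0)
    return [math.log2(t2 / t1) for t1, t2 in zip(times, times[1:])]

def quadratic(n):
    """O(N^2) stand-in workload: measured exponents approach 2.0."""
    s = 0
    for i in range(n):
        for j in range(n):
            s += i ^ j
    return s

print(doubling_exponents(quadratic, [400, 800, 1600, 3200]))
```

Timings are noisy for small N, so in practice one would repeat each measurement and look at the trend over the larger sizes.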

For the sake of clarity, let's say there is no trick in the input value range. The input values must lie in the range 0 to 2^32 − 1, that is 0 to 4,294,967,295 (32-bit unsigned integers).
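That value-range caveat matters because the well-known dynamic-programming solution is only *pseudopolynomial*: its running time depends on the magnitude of the target, not just on N. A sketch for illustration:

```python
def subset_sum_dp(values, target):
    """Classic DP over reachable sums: reachable[s] is True iff some
    subset of `values` sums to s.

    Runs in O(N * target) time and space, which is pseudopolynomial:
    with values up to 2^32 the target (and hence the table) can be
    astronomically large, so this is no shortcut for the general
    problem with large input values.
    """
    reachable = [True] + [False] * target
    for v in values:
        # Iterate downwards so each value is used at most once.
        for s in range(target, v - 1, -1):
            if reachable[s - v]:
                reachable[s] = True
    return reachable[target]

print(subset_sum_dp([3, 34, 4, 12, 5, 2], 9))  # True
```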

Feel free to correct the above statements, the formulas, the big-O notation, or whatever else you believe could be wrong.

John