12 Replies - 1564 Views - Last Post: 29 July 2013 - 02:54 PM

#1 Dylsauce

How do you find the time complexity of an algorithm?

Posted 27 July 2013 - 05:33 PM

I'm only using big-O notation, and I understand the basics. For example:

If something is independent of the number of elements, like an assignment this = that, it is O(1).

In a simple loop like

for (int i = 0; i < n; i++) {
    this = new thing();
}

I simply look at the number of times the loop executes in terms of n, i.e. O(n).

Or if it were something like:

for (double i = 0; i < n; i += 0.5) {
    this = new thing();
}

then it would loop 2n times, which, since constant factors are dropped, is still O(n).

But I cannot figure out how to work out the ones that are O(n log n), O(log_2 n), O(2^n), etc.

Can anyone provide an algorithm for assessing the algorithms?


Replies To: How do you find the time complexity of an algorithm?

#2 macosxnerd101

Posted 27 July 2013 - 06:52 PM

Some good tutorials:
Mine on Computational Complexity
KYA's Big-O Part I tutorial
KYA's Big-O Part II tutorial.

#3 farrell2k

Posted 27 July 2013 - 07:42 PM

I wish I were smart like Mac. :)

#4 Dylsauce

Posted 27 July 2013 - 08:36 PM

To be honest, the one thing I feel these don't explain well is the log case. He says:

"
Example:
for(int i = 0; i < n; i *= 2) {
    cout << i << endl;
}

There are n iterations, however, instead of simply incrementing, 'i' is increased by 2*itself each run. Thus the loop is log(n)." -KYA

But I don't get how that premise warrants the conclusion. What does the 2 have to do with log? His second tutorial has some material involving graphs and so forth, but I didn't find that it helped either.

I don't think that code will do anything as written, because 0 * 2 == 0 always, but let's say i = 1 to start. So if,

n = 1: 0 iterations
n = 2: 1
n = 3: 2
n = 4: 2
n = 5: 3

I just don't quite get how you go from the expression to actual logs.

#5 macosxnerd101

Posted 27 July 2013 - 08:42 PM

To clarify, i = 1 would have to be the base case to avoid an infinite loop. Consider n = 32. If i = 1 and i *= 2 on each iteration, the loop runs for i = 1, 2, 4, 8, 16: that is, log_2(32) = 5 iterations. If instead i *= 3 on each iteration, it would be about log_3(n) iterations.

When you multiply, you cover more ground faster. So you hit fewer numbers before hitting the condition.

Think of it similarly to binary search. We know that its runtime complexity is O(log(n)). On each iteration, it partitions the list in half. The loop is doing the same thing essentially.
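To make the analogy concrete, here is a minimal binary search sketch (the class and method names are mine, not from any of the tutorials). Each pass discards half of the remaining range, which is the i *= 2 pattern run in reverse, so the loop body executes about log_2(n) times.

```java
public class BinarySearchDemo {
    // Returns the index of target in the sorted array a, or -1 if absent.
    public static int search(int[] a, int target) {
        int lo = 0, hi = a.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;      // midpoint, written to avoid overflow
            if (a[mid] == target) return mid;
            if (a[mid] < target) lo = mid + 1; // discard the left half
            else                 hi = mid - 1; // discard the right half
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] a = {1, 3, 5, 7, 9, 11, 13, 15};
        System.out.println(search(a, 7));  // prints 3 (index of 7)
        System.out.println(search(a, 4));  // prints -1 (not present)
    }
}
```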

#6 Dylsauce

Posted 28 July 2013 - 08:36 AM

Ahh, I see. So when I check for logs, how do I evaluate the base? I mean, I can eliminate the other possibilities (O(n), O(n^2), etc.). Is it best to just write out the growth of i for some n and try to eyeball the base? And how do I check? If it is log_2(n), doesn't that mean log_2(32) = 5 iterations?

#7 macosxnerd101

Posted 28 July 2013 - 08:45 AM

The base is the partition: if i is multiplied by k on each iteration, then it is log_k(n). The base of the log isn't important to the algorithm's complexity, though, since by the change-of-base formula switching bases only changes a constant factor.
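A quick way to convince yourself of this is to count the iterations directly (a hypothetical helper of my own, not from the tutorials):

```java
public class LogLoopDemo {
    // Counts how many times "for (i = 1; i < n; i *= k)" iterates.
    public static int iterations(int n, int k) {
        int count = 0;
        for (long i = 1; i < n; i *= k) count++;
        return count;
    }

    public static void main(String[] args) {
        System.out.println(iterations(32, 2)); // 5, i.e. log_2(32)
        System.out.println(iterations(27, 3)); // 3, i.e. log_3(27)
    }
}
```

Multiplying by a bigger k just divides the count by a constant (log_3(n) = log_2(n) / log_2(3)), which is why the base gets dropped in Big-O.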

#8 Dylsauce

Posted 28 July 2013 - 10:50 AM

Ah okay, thanks. And how about something that grows as O(2^n)?

public int fib(int n) {
    if (n <= 1) return n;
    else return fib(n - 2) + fib(n - 1);
}

So the fib() in the else branch calls itself twice: once for (n - 2) and once for (n - 1). So it's something like n*(the cost of the else branch), which seems like n + n... but then each of those fib calls is again n*(the cost of its own else branch). How does one get from that to O(2^n)?

#9 cfoley

Posted 28 July 2013 - 11:12 AM

You call fib and it makes 2 calls. Each of those makes 2 calls, and so on. Each level of recursion roughly doubles the number of calls, and the recursion is about n levels deep, hence O(2^n).

My experience with big O is that it's not necessarily the complexity class that makes code easy to analyse; it's how often you have seen that kind of code. Without much practice you'll be able to spot linear and quadratic loops. Log and exponential algorithms take a bit longer (if only because we tend not to write so many of them), and then there are the recursive versions of them.

Even when you have them pretty much sorted, be prepared for code that you have to go back to first principles to analyse. I spent a whole day a couple of years ago determining the complexity class of what turned out to be a linear algorithm.
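The doubling is easy to observe by instrumenting the fib from the post above with a call counter (the counter is my own addition). Successive totals grow by a factor of roughly 1.6 rather than exactly 2, so O(2^n) is a comfortable upper bound:

```java
public class FibCallCounter {
    public static long calls = 0;

    // The fib from the post above, instrumented to count every invocation.
    public static int fib(int n) {
        calls++;
        if (n <= 1) return n;
        return fib(n - 2) + fib(n - 1);
    }

    public static void main(String[] args) {
        long prev = 0;
        for (int n = 5; n <= 15; n++) {
            calls = 0;
            fib(n);
            // The ratio between consecutive totals approaches the golden
            // ratio (1 + sqrt(5))/2, about 1.618.
            System.out.println("fib(" + n + ") -> " + calls + " calls"
                    + (prev > 0 ? ", ratio " + (double) calls / prev : ""));
            prev = calls;
        }
    }
}
```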

#10 macosxnerd101

Posted 28 July 2013 - 11:43 AM

The Fibonacci algorithm you presented is O(((1 + sqrt(5))/2)^n), which is still O(2^n). Let's derive T(n), the runtime complexity function. We know that T(n) = 2 (comparison + return) for n <= 1. Otherwise, T(n) = T(n-1) + T(n-2). Notice that this is the Fibonacci recurrence, save for the base case. The roots are still (1 +- sqrt(5))/2. The constant coefficients are a tad different due to the base case(s), but they are dropped when evaluating Big-O.

Also, I'm going to move this to Computer Science.

#11 mojo666

Posted 29 July 2013 - 09:54 AM

Quote

But I cannot figure out how to work out the ones that are O(n log n), O(log_2 n), O(2^n), etc.

Can anyone provide an algorithm for assessing the algorithms?


There is no single algorithm that produces the complexity. You are ultimately using proof techniques, and different problems will require different techniques. Sometimes it only requires simple counting; other times I have had to fall back on calculus to derive the appropriate function.

Complexity is related to the number of times code repeats. If you can isolate a bunch of commands that are O(1), you can say those commands take 1 unit of time to run. Then count how often those commands repeat and identify the function that accurately represents the time taken for a particular input size.

There's a simple rule of thumb for the functions you listed.

If every time I multiply the input by a constant c where c > 1, the runtime increments by a constant k, then it is O(log(n)).

So, if for input of size n the algorithm completes in 5 units of time, a logarithmic algorithm of base 2 will take 6 units of time for input of size 2n, 7 units for 2*2n, 8 units for 2*2*2n, and so on.

If every time I increment the input by one, the runtime gets multiplied by a constant c > 1, then it is O(c^n).

So, if for input size n it takes 5 units of time, an O(2^n) algorithm would take 2*5 units to process input of size n+1, 2*2*5 units for n+2, 2*2*2*5 units for n+3, and so on.

A short, oversimplified summary: if I double the input and the runtime increases by 1, it is O(log(n)); if I increment the input by 1 and the runtime doubles, it is O(2^n).

This relies on limited experiments instead of mathematical proof, so I wouldn't rely on it exclusively to produce accurate runtimes. However, it can be helpful in at least getting an idea of what runtime you are looking for.
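That rule of thumb is easy to try out mechanically. Here is a sketch (class and method names are mine) that probes a halving-style loop by repeatedly doubling n; each doubling adds exactly one unit of work, the O(log n) signature described above:

```java
public class GrowthProbe {
    // Iteration count of a log-style loop: i doubles until it reaches n.
    public static int logSteps(int n) {
        int steps = 0;
        for (long i = 1; i < n; i *= 2) steps++;
        return steps;
    }

    public static void main(String[] args) {
        // Doubling the input adds exactly one step each time.
        for (int n = 16; n <= 256; n *= 2) {
            System.out.println("n = " + n + " -> " + logSteps(n) + " steps");
        }
    }
}
```

The same experiment on an exponential algorithm would show the runtime multiplying by a constant each time n is incremented by 1, as described above.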

#12 Dylsauce

Posted 29 July 2013 - 02:21 PM

What about Horner's method? What is its purpose? I understand it's some kind of evaluation technique for polynomials, but how does it relate to my time complexity calculation? I also find it difficult to see the relationship between limits and the time complexity of a function.

This post has been edited by Dylsauce: 29 July 2013 - 02:22 PM


#13 macosxnerd101

Posted 29 July 2013 - 02:54 PM

Horner's method looks like a standard problem to practice algorithm analysis, rather than a tool to help you evaluate other algorithms.

Big-O is defined: f(n) is O(g(n)) if and only if there exist positive constants C and k such that |f(x)| <= C * |g(x)| for all x >= k.

Big-Omega is defined similarly; just flip the inequality to |f(x)| >= C * |g(x)|.

A limit is defined: lim(x -> c) f(x) = L means that for every e > 0 there exists d > 0 such that whenever 0 < |x - c| < d, we have |f(x) - L| < e (d, e, and x are all reals).

Big-O and Big-Omega can pretty easily be written as limits. There is actually a limit test. Consider lim(n -> infinity) f(n)/g(n) = L. If 0 < L < infinity, then f(n) = Theta(g(n)). If L = 0, then f(n) = O(g(n)). If L = infinity, then f(n) = Omega(g(n)).
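A couple of worked instances of the limit test (my own examples, not from the tutorials):

lim(n -> infinity) n / n^2 = lim(n -> infinity) 1/n = 0, so n = O(n^2) (but not Theta(n^2)).
lim(n -> infinity) (3n^2 + n) / n^2 = 3 + lim(n -> infinity) 1/n = 3, and 0 < 3 < infinity, so 3n^2 + n = Theta(n^2).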
