9 Replies - 3314 Views - Last Post: 13 September 2012 - 03:54 AM

#1 healix

Big O notation

Posted 11 September 2012 - 10:48 PM

Can someone explain this to me in the most elementary way possible? I understand what it's for, but what I don't understand is how to express code in Big O notation. Also, how do you tell if n^2 + 5n in Big O is o(n^2)? Do you just drop the lower-order terms, and would these be right?
100n + 5 = 0(n)
5n + 3n^2 = o(n^2)

I may sound like I'm all over the map, but I am. I'm trying to understand this, and I don't understand how it translates into the efficiency of algorithms.

Two for loops equal n^2. Why?

Thanks in advance!


Replies To: Big O notation

#2 karabasf

Re: Big O notation

Posted 12 September 2012 - 01:25 AM


Not sure if this topic should be in here, but basically (citing Wikipedia):

Quote

http://en.wikipedia..../Big_O_notation

  • If f(x) is a sum of several terms, the one with the largest growth rate is kept, and all others omitted.
  • If f(x) is a product of several factors, any constants (terms in the product that do not depend on x) are omitted.


So for your two examples:
Ex 1)
100n + 5 = O(n), why?

"Rule 1": The largest growth rate is kept. As n (a linear term) is a bigger grow rate than 5 (a constant term), 5 drops. Thus we are left with 100n.
"Rule 2":
Then, as 100 is a constant which is not related with n, we are left with n, therefore the order is O(n)

Now that I've done the first example, try the second one yourself ;)

As for the for loop example, let's consider the following (note that it is not always true that two for loops result in O(n²); the performance depends on your algorithm):

for(int i = 0; i < n; i++){
  //Some code
  for(int j = 0; j < n; j++){  // inner counter renamed to j so it doesn't shadow i
    //Some code
  }
}



So let's assume that the //Some code parts each take constant time. The outer for loop then runs n times. However, we also have an inner loop, which also runs n times: for each step of the outer loop we have to run n steps of the inner loop. This continues until the outer loop reaches n.

Thus (n * constant) for the outer loop and its //Some code part, times (n * constant) for the inner loop part, gives c² * n²,

Which results in O(n²) after applying the proposed rules.

Hope this helps you out ;)



#3 JackOfAllTrades

Re: Big O notation

Posted 12 September 2012 - 03:17 AM

Moved to Computer Science; this is not a C/C++ question

#4 sepp2k

Re: Big O notation

Posted 12 September 2012 - 08:10 AM

healix, on 12 September 2012 - 07:48 AM, said:

Also, how do you tell if n^2 + 5n in Big O is o(n^2)? Do you just drop the lower-order terms, and would these be right?
100n + 5 = 0(n)
5n + 3n^2 = o(n^2)


n^2 + 5n and 5n + 3n^2 are O(n^2), not o(n^2). Capital O. To be o(n^2) they'd need to be in O(n) or O(n log n) or something else that grows strictly slower than n^2.

Quote

Two for loops equal n^2. Why?


As karabasf said, that really depends on the loops and their relation to each other. If you want to reason about the runtime of a loop, you need to consider two things:

a) How many times will the loop execute?
b) How much time will it take for each iteration of the body to execute?

If the number of iterations is x and the runtime of the body is y, then the total runtime of the loop is x*y.

If you have a loop of the form for(int i = 0; i<n; i++), then clearly it will execute n times (assuming neither i nor n are changed inside the loop body). So if the body is a simple operation that only takes a constant amount of time (let's call that amount c), the total runtime is c*n, which is in O(n) (because as karabasf pointed out, constant factors don't matter). If the body is another loop and that loop also goes from 0 to n, then the body of that loop will execute a total of n*n times. So if that body takes a constant amount of time, the total cost will be n*n*c, which is in O(n^2).

However, if you have two loops that execute after each other (not inside each other), then the total runtime is T1+T2, where T1 is the runtime of the first loop and T2 the runtime of the second. So assuming both loops take O(n) time, the total runtime is O(n+n) = O(n).

Then of course a single loop could also take quadratic time, if it iterates from 0 to n*n instead of from 0 to n.

And of course if you have two nested loops where the outer loop iterates from 0 to n and the inner loop iterates from 0 to 42 and the inner body takes constant time, the total runtime will be O(n).
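
To make those cases concrete, here is a minimal C++ sketch (my own illustration, not code from the thread); the counter just makes the number of iterations visible, and the complexity of each loop shape is noted in the comments:

#include <cstdio>

int main()
{
    int n = 1000;
    long count = 0;

    // Two loops after each other: n + n iterations, O(n)
    for (int i = 0; i < n; i++) count++;
    for (int i = 0; i < n; i++) count++;

    // A single loop that runs to n*n: O(n^2)
    for (int i = 0; i < n * n; i++) count++;

    // Nested loops with a constant inner bound: 42*n iterations, still O(n)
    for (int i = 0; i < n; i++)
        for (int j = 0; j < 42; j++) count++;

    printf("%ld\n", count);  // n + n + n*n + 42*n = 1044000 for n = 1000
    return 0;
}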

#5 macosxnerd101

Re: Big O notation

Posted 12 September 2012 - 08:34 AM

Let's look at this another way: with the definition of Big-O. Remember that f(n) is O(g(n)) if and only if there are positive constants C and k such that for all n >= k, |f(n)| <= C*|g(n)|. In other words: if you multiply g(n) by some constant C, is there a point k beyond which C*g(n) is an upper bound of f(n)?

So for #1:
f(n) = 100n + 5
g(n) = n

Is there some constant C so that C*|g(n)| >= |f(n)| from some point k onward? What if C = 101? Then 101n >= 100n + 5 whenever n >= 5, so choosing k = 5 satisfies the definition of Big-O, hence f(n) is O(n) here.
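
If you want to convince yourself numerically, a quick C++ sketch like this (my own illustration; it can only spot-check the inequality over a finite range, not prove it for all n) does the job:

#include <cstdio>

int main()
{
    // Spot-check |f(n)| <= C*|g(n)| for f(n) = 100n + 5, g(n) = n,
    // with C = 101 and k = 5.
    const long C = 101, k = 5;
    for (long n = k; n <= 1000000; n++)
    {
        if (100 * n + 5 > C * n)
        {
            printf("bound fails at n = %ld\n", n);
            return 1;
        }
    }
    printf("101*n >= 100*n + 5 held for every tested n >= 5\n");
    return 0;
}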

Also, KYA has a couple good tutorials on Big-O you should check out:
http://www.dreaminco...big-o-notation/
http://www.dreaminco...tation-part-ii/

#6 mojo666

Re: Big O notation

Posted 12 September 2012 - 10:47 AM

Quote

I may sound like I'm all over the map, but I am. I'm trying to understand this, and I don't understand how it translates into the efficiency of algorithms.

Learning and working with the formal definitions will make working with Big-O easier, but I will try to provide a general description of what we are trying to achieve with Big-O.

The whole point of this notation is to understand how long an algorithm will run in relation to varying input, particularly how efficient it is as the input grows. The reason for this is that input tends to grow (more customers means more records to store and bigger numbers to deal with, etc.) and we would like some confidence that our algorithms will still meet runtime requirements. "n" is our input. It could represent the size of a collection, the value of a number, etc. The function inside the Big-O notation tells you how long you might expect your algorithm to take. We get the Big-O function by getting a rough count of how many commands are executed. Generally this is the number of times code is repeated multiplied by the number of commands per iteration. You can test this by putting a counter inside the code.

// runnable C++ version of the pseudocode
#include <cstdio>

int main()
{
  int n = 5;
  int count = 0;
  for (int i = 0; i < n; i++)
  {
    for (int j = 0; j < n; j++)
    {
      //code
      count++;  // counts one execution of the inner body
    }
  }
  printf("%d\n", count);  // prints 25, i.e. n^2
  return 0;
}



In the above sample, "count" will always be equal to n^2. It shows you how many times "count++" was executed. The actual runtime will be the number of commands per loop multiplied by "count". However, the actual runtime is not that important. It will vary with "count", so we can just use count to represent our program's efficiency. After all, removing one line from whatever is in our "//code" will have little effect on our efficiency, because any growth in the input will quickly eliminate the gain. For example, if we were running 4 commands inside the loop, the runtime would be 4 * 5^2 = 100. If we reduce it to 3 commands but the input doubles (setting n to 10), the runtime would be 3 * 10^2 = 300. Our improvement was easily eliminated by the growing input. For the same reason, we drop lower orders when looking at Big-O:
#include <cstdio>

int main()
{
  int n = 5;
  int count = 0;
  for (int i = 0; i < n; i++)
  {
    for (int j = 0; j < n; j++)
    {
      //code
      count++;
    }
  }

  for (int i = 0; i < n; i++)
  {
    //code
    count++;
  }
  printf("%d\n", count);  // prints 30, i.e. n^2 + n
  return 0;
}


In this example, count will always be n^2 + n, so you might be tempted to say that it is O(n^2 + n). However, the effects of the extra "n" are dwarfed by the effects of "n^2", so we tend to ignore lower orders the same way we ignore the exact command count. When n is 5, count is 5^2 + 5 = 30. If we were to improve the extra loop to run in only half the time, the count would instead be 27 or 28. Now when the input doubles, the new count is 105. Again, the benefits of our improvement quickly disappear. For this reason, we only care about the most expensive parts of the algorithm when assessing runtime, since they would be the main source of our woes if an algorithm is underperforming. I hope this offered some perspective on the utility of Big-O.

#7 healix

Re: Big O notation

Posted 12 September 2012 - 03:10 PM

Thanks everyone, really appreciate it. So is it used more before writing an algorithm or more after? Meaning, if I have a function that works fine with small data, and then the data increases and the efficiency starts to drop, what do I do? Rework the function? And how does Big O help me with that?

If I have a function that is O(n^2) and that becomes inefficient, do I pick another algorithm to use based on its Big O notation?

Quote

Also, KYA has a couple good tutorials on Big-O you should check out:
http://www.dreaminco...big-o-notation/
http://www.dreaminco...tation-part-ii/


I checked these out last night but was still confused. The broad idea is there but I'm trying to understand its implementation.



#8 mojo666

Re: Big O notation

Posted 12 September 2012 - 03:33 PM

Big-O evaluates how an algorithm performs, so if you haven't figured out an algorithm yet, then there's nothing to assess. Big-O just provides the metric. Optimization techniques take education and practice. If an algorithm no longer meets the requirements, you may have to find a better algorithm. An algorithm is considered better if its Big-O function produces a smaller result for a given n. For example, if you implemented a bubble sort to sort an array, it will take O(n^2) time, where n is the number of items in the array. You would want to find an algorithm that is O(n) or O(n*log(n)). An example of such a sort is merge sort, which is O(n*log(n)). In this case, the two sort methods are completely different from the get-go, but sometimes you will be able to just add a modification to the existing algorithm that changes its efficiency and improves its Big-O performance. There are also times when an algorithm is the best you can get for the task at hand and cannot be improved any further.
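
To see why that matters as n grows, here is a rough C++ sketch (my own illustration; it just prints the dominant step counts rather than timing real sorts):

#include <cmath>
#include <cstdio>

int main()
{
    // Rough step counts for an O(n^2) sort (e.g. bubble sort) versus an
    // O(n*log n) sort (e.g. merge sort), ignoring constant factors.
    long sizes[] = {10, 100, 1000, 100000};
    for (long n : sizes)
    {
        printf("n = %6ld: n^2 = %12.0f, n*log2(n) = %12.0f\n",
               n, (double)n * n, n * std::log2((double)n));
    }
    return 0;
}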

#9 elgose

Re: Big O notation

Posted 12 September 2012 - 09:48 PM

Quote

If I have a function that is O(n^2) and that becomes inefficient, do I pick another algorithm to use based on its Big O notation?

Ideally, sure. But only when it truly becomes inefficient, meaning it becomes a bottleneck in your program. As a general rule, you want to avoid optimizing your code before you know what's really slowing things down. With that said, things aren't always as straightforward as simply "This algorithm is O(n), and this one is O(n^2), so obviously I use the first!"

For a basic example, consider array lists and linked lists. If you plan to add/delete data to the end or access random indices, an array list is far better because you can add to the end or access any index in constant (O(1)) time. But to add to the beginning or delete from the beginning, you run into O(n) time. Linked lists are basically the opposite... adding to the front or deleting from the front is O(1), while accessing, adding to the end, or deleting from the end are O(n).

So in this case, if I have to choose between just those two data structures, I must look deeper into which types of transactions occur more often and whether they occur in a way that acts as a bottleneck.

Also keep in mind that CS takes some liberties with the definition of Big-Oh. By definition, something that is O(n) is also O(n^2), because n^2 is an upper bound on n, and thus an upper bound on whatever you were considering. But in CS, we tend to use Big-Oh to mean not just an upper bound, but the closest upper bound (in other words, we usually mean Big-Omega rather than Big-Oh).

#10 sepp2k

Re: Big O notation

Posted 13 September 2012 - 03:54 AM

elgose, on 13 September 2012 - 06:48 AM, said:

Ideally, sure. But only when it truly becomes inefficient, meaning it becomes a bottleneck in your program. As a general rule, you want to avoid optimizing your code before you know what's really slowing things down. With that said, things aren't always as straightforward as simply "This algorithm is O(n), and this one is O(n^2), so obviously I use the first!"


That only applies up to a point though. If you can tell right away that an algorithm is, say, O(2^n), you shouldn't even bother implementing the algorithm, no matter how much simpler it is than the alternative. That's not premature optimization - it's just common sense.

Quote

Linked lists are basically the opposite... adding to the front or deleting from the front is O(1), while accessing, adding to the end, or deleting from the end are O(n).


Reading from and writing to the end of a (doubly) linked list is O(1) - just like at the front. Only random access is O(n).

Note that linked lists can have significant constant overhead compared to arrays and that there are array-based data structures that allow appending to and deleting from the front in amortized O(1) time as well, so a linked list is very often not the best choice even if you do a lot of inserting and deleting at the front.
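
For instance (my own illustration; std::deque in C++ is one such array-based structure), pushing to either end is amortized O(1):

#include <deque>
#include <cstdio>

int main()
{
    std::deque<int> d;
    // Amortized O(1) insertion at both ends, unlike a plain dynamic
    // array, where inserting at the front costs O(n) per insert.
    for (int i = 0; i < 1000; i++)
    {
        d.push_front(i);  // amortized O(1)
        d.push_back(i);   // amortized O(1)
    }
    printf("size = %zu, front = %d, back = %d\n",
           d.size(), d.front(), d.back());
    return 0;
}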

Quote

Also keep in mind that CS takes some liberties with the definition of Big-Oh. By definition, something that is O(n) is also O(n^2), because n^2 is an upper bound on n, and thus an upper bound on whatever you were considering. But in CS, we tend to use Big-Oh to mean not just an upper bound, but the closest upper bound


In my experience it's usually non-computer scientists (and inexperienced students) that misuse the term that way.

Quote

(in other words, we usually mean Big-Omega rather than Big-Oh).


You mean Big-Theta.


