# Big-O Algorithm Efficiency

## Express the tightest upper bound of the following loops in big-O notation

a)

```java
int sum = 0;
int n = 100000;
```

e)

```java
for (int j = 0; j < n; j++)
    sum += j;
```

f)

```java
for (int j = 1; j < n; j *= 2)
    sum += j;
```

4 Replies - 6066 Views - Last Post: 10 February 2007 - 08:55 AM

**Replies To:** Big-O Algorithm Efficiency

### #2

## Re: Big-O Algorithm Efficiency

Posted 19 January 2007 - 05:30 PM

We are not here to do your homework for you. Ask a clear question, and post the relevant code that you have.

### #3

## Re: Big-O Algorithm Efficiency

Posted 19 January 2007 - 05:41 PM

"Express the tightest upper bound of the following loops in big-O notation" — that is the question. I just learned about big-O and still don't really get it. Can someone give me an answer to the question I asked, with an explanation, so I can understand this better?

### #4

## Re: Big-O Algorithm Efficiency

Posted 10 February 2007 - 08:28 AM

Big O is relatively easy to understand; here's an overview.

Consider the function 3x^4 + 2x^3 + x^2 + 3.

We would say that this function is Big-O of x^4, and a suitable constant is 9: for x >= 1, each lower-order term is at most x^4, so 3x^4 + 2x^3 + x^2 + 3 <= (3 + 2 + 1 + 3)x^4 = 9x^4.

A function F(x) is Big O of g(x) when F(x) <= C*g(x) for some constant C and all sufficiently large x.

Generally when you do Big O you raise every term to the highest power (x^4 in this case), add up the coefficients, and then simplify by keeping only the dominant term.

For programming purposes, any single statement (e.g. an assignment or a print) is said to be O(1) because it takes a constant amount of time. A for loop that runs n times is O(n). If you have an if statement, calculate the Big O of the true branch and the false branch and take the maximum. If you have two nested for loops, each running n times, then generally they are O(n^2).

Hope this helps some.

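One way to see the difference between loops e) and f) from the question is to count iterations directly. This sketch is not from the thread; it just instruments both loops, showing why e) is O(n) while the doubling loop in f) is O(log n):

```java
// Counts loop iterations to compare the growth rates of the question's loops.
public class LoopCounts {
    // Loop e): j goes 0, 1, 2, ..., n-1  ->  n iterations  ->  O(n)
    static long countLinear(int n) {
        long count = 0;
        for (int j = 0; j < n; j++) {
            count++;
        }
        return count;
    }

    // Loop f): j goes 1, 2, 4, 8, ... while j < n
    // -> about log2(n) iterations -> O(log n)
    static long countDoubling(int n) {
        long count = 0;
        for (int j = 1; j < n; j *= 2) {
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        int n = 100000;
        System.out.println("linear:   " + countLinear(n));   // 100000
        System.out.println("doubling: " + countDoubling(n)); // 17, since 2^17 = 131072 >= 100000
    }
}
```

With n = 100000, the linear loop runs 100000 times but the doubling loop runs only 17 times, which is the logarithmic gap in action.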

### #5

## Re: Big-O Algorithm Efficiency

Posted 10 February 2007 - 08:55 AM

nice explanation mattman059.

There have been many threads on big-O, most including code which looks almost exactly like what you have posted. Big-O is not a complicated notation; it simply tries to determine a worst-case runtime for the code. A for loop hits its worst case when every element is visited, so determine the number of elements that could be visited and there is your answer.

*Coefficients are always dropped, e.g.:

1) n/2 = 1/2 * n -> O(n), linear time

2) 2n = 2 * n -> O(n), still linear

3) 5 = 5 * 1 -> O(1), constant time

It is interesting to note that even though algorithms taking n/2 and 2n steps appear to have a large difference in runtime, in practice on large data sets both run in linear time as an upper bound.

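The point about dropping coefficients can be checked numerically. This is a small sketch (not from the thread) that counts steps for n/2, n, and 2n units of work: doubling n doubles each count, so the constant factor never changes the growth class:

```java
// Demonstrates that n/2, n, and 2n steps all scale linearly:
// the ratio of each count to n stays fixed (0.5, 1, 2) as n grows.
public class ConstantFactors {
    // Counts 'work' unit steps, standing in for a loop body run 'work' times.
    static long steps(long work) {
        long count = 0;
        for (long i = 0; i < work; i++) {
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        for (int n = 1000; n <= 4000; n *= 2) {
            long half  = steps(n / 2);   // n/2 steps -> O(n)
            long one   = steps(n);       // n   steps -> O(n)
            long twice = steps(2L * n);  // 2n  steps -> O(n)
            System.out.println(n + ": " + half + " " + one + " " + twice);
        }
    }
}
```

All three columns double when n doubles; only the constant in front differs, which is exactly why big-O treats them all as O(n).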
