14 Replies - 1258 Views - Last Post: 03 October 2013 - 11:42 AM

#1 streek405

  • D.I.C Addict

Reputation: 10
  • Posts: 537
  • Joined: 10-March 13

Do people really use the "Big O" when programming?

Posted 29 September 2013 - 10:35 PM

I was just wondering if you guys really base your code on the "Big O". My professor brought it up a few weeks ago and stated that we should always base our programs on the algorithms with the smallest "Big O" value, like O(1) or O(log n). In theory, it makes sense to me, but to always base your program on it doesn't really seem realistic to me...but that's probably because I'm still a noob.

Thanks.

This post has been edited by streek405: 29 September 2013 - 10:37 PM

Is This A Good Question/Topic? 0

Replies To: Do people really use the "Big O" when programming?

#2 macosxnerd101

  • Self-Trained Economist

Reputation: 10397
  • Posts: 38,473
  • Joined: 27-December 08

Re: Do people really use the "Big O" when programming?

Posted 29 September 2013 - 11:22 PM

A lot of the practical value of complexity theory comes down to the fact that you want to write efficient code. Why would you use an approach that takes exponential time when an n log(n) time approach would work? A lot of the reason one studies theoretical computer science is to train oneself to think about efficient and effective approaches to problem solving.
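
To make that gap concrete, here is a toy sketch of my own (not from the post, and in Python purely for illustration): the naive recursive Fibonacci takes exponential time, while the iterative version is linear.

def fib_slow(n):
    # Exponential time: every call spawns two more recursive calls.
    if n < 2:
        return n
    return fib_slow(n - 1) + fib_slow(n - 2)

def fib_fast(n):
    # Linear time: a single pass with two running values.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib_slow(30), fib_fast(30))  # same answer, wildly different running times as n grows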
Was This Post Helpful? 2

#3 streek405

  • D.I.C Addict

Reputation: 10
  • Posts: 537
  • Joined: 10-March 13

Re: Do people really use the "Big O" when programming?

Posted 29 September 2013 - 11:31 PM

macosxnerd101, on 29 September 2013 - 11:22 PM, said:

A lot of the practical value of complexity theory comes down to the fact that you want to write efficient code. Why would you use an approach that takes exponential time when an n log(n) time approach would work? A lot of the reason one studies theoretical computer science is to train oneself to think about efficient and effective approaches to problem solving.

Do you always have to write it out by hand to solve for Big O, or do you just know the complexity by now?
Was This Post Helpful? 0

#4 macosxnerd101

  • Self-Trained Economist

Reputation: 10397
  • Posts: 38,473
  • Joined: 27-December 08

Re: Do people really use the "Big O" when programming?

Posted 30 September 2013 - 06:08 AM

Depends on the algorithm. When I write code, especially professionally, I don't work out the time complexities. For a lot of what I do in math, I do end up using it a fair bit.
Was This Post Helpful? 0

#5 baavgai

  • Dreaming Coder

Reputation: 5780
  • Posts: 12,596
  • Joined: 16-October 07

Re: Do people really use the "Big O" when programming?

Posted 30 September 2013 - 06:51 AM

There are no real-world projects that can be boiled down to a single Big O. They are, of course, composed of many, many sub-problems you could do algorithm analysis on, but that's rarely where your development issue will be.

Really, it's all about bottlenecks. Understanding complexity will help once you've identified a particular bottleneck; or not. The important thing is knowing that there is a cost with everything you do and being aware of things that are inherently more costly.

As you suspect, people use Big O mostly in Computer Science and the types of problems CS likes to work on. Standard solutions to those problems are usually part of core frameworks that most programmers simply reference. Application Development is a big, messy business that doesn't usually boil down to anything so easily dissected.
Was This Post Helpful? 2

#6 mostyfriedman

  • The Algorithmi

Reputation: 727
  • Posts: 4,473
  • Joined: 24-October 08

Re: Do people really use the "Big O" when programming?

Posted 30 September 2013 - 08:52 AM

One thing to add to the other replies is that the algorithm with the smallest Big-O is not ALWAYS chosen in practice, because you also have the constant factor. Sometimes that factor is big enough that an asymptotically slower algorithm will actually perform better if the problem size is not that big, and sometimes the problem size is not expected to be large enough to justify the "faster" algorithm. For example, insertion sort is faster than merge sort for small arrays, which is why in Java and Python (probably other languages too) the sorting function in the standard library uses both algorithms. It looks something like the following.

sort(array):
    merge-sort(0, length(array) - 1, array)

merge-sort(p, q, array):
    if q - p <= k                      // k is some constant determined through experimentation
        insertion-sort(p, q, array)    // small ranges: insertion sort's low constants win
    else
        r = (p + q) / 2
        merge-sort(p, r, array)        // sort the left half
        merge-sort(r + 1, q, array)    // sort the right half
        merge(p, r, q, array)          // merge the two sorted halves


In general though, performing benchmarks is useful in practice to determine which of the candidate algorithms works best for the problem at hand.
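
A rough benchmark along those lines might look like the sketch below (my own Python illustration, not from the post; the list size, the cutoff, and the results will all vary by machine):

import random
import timeit

def insertion_sort(a):
    # Quadratic in the worst case, but with very small constant factors.
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

small = [random.random() for _ in range(16)]
# Time both candidates on the same small input; which one wins depends on size and machine.
print(timeit.timeit(lambda: insertion_sort(small[:]), number=10000))
print(timeit.timeit(lambda: sorted(small), number=10000))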

This post has been edited by mostyfriedman: 30 September 2013 - 08:55 AM

Was This Post Helpful? 2

#7 Psyguy

  • D.I.C Regular

Reputation: 69
  • Posts: 314
  • Joined: 12-January 11

Re: Do people really use the "Big O" when programming?

Posted 30 September 2013 - 09:16 AM

Theoretical analysis and real-world problems are rarely the same thing. When writing code, you are always thinking about efficiency, but do you always have to specifically look at the Big-O notation? That is up to the developer.
Was This Post Helpful? 1

#8 streek405

  • D.I.C Addict

Reputation: 10
  • Posts: 537
  • Joined: 10-March 13

Re: Do people really use the "Big O" when programming?

Posted 30 September 2013 - 12:15 PM

baavgai, on 30 September 2013 - 06:51 AM, said:

Really, it's all about bottlenecks.

I've heard that term before, but what does it mean?
Was This Post Helpful? 0

#9 baavgai

  • Dreaming Coder

Reputation: 5780
  • Posts: 12,596
  • Joined: 16-October 07

Re: Do people really use the "Big O" when programming?

Posted 30 September 2013 - 12:39 PM

POPULAR

Imagine a bottle of liquid. If it were simply a cylinder then, when you upended it, all the liquid would just fall out. However, most bottles have a neck, like a classic wine bottle. Upend such a bottle and the liquid still pours out, but it chokes up at the neck, glug, glug, glug.

A bottleneck is a choke point. Just as a chain is only as strong as its weakest link, a program is only as fast as its slowest element. You could have the fastest network in the world, but if your data passes through a 10MB switch somewhere, you have a 10MB network.

Programs are tricky, but basically the bottleneck is where the program is slowing down the most. If your program spends a lot of time waiting for data from a database, then it doesn't matter that the client renders instantly, it's still waiting. Conversely, if your bottleneck is IO, and you can only whittle that down so much, what is the impact of a screen that takes an extra half second to refresh?

Games are insane for bottlenecks. Floats are traded for ints to shave off the overhead of an extra math operation. Make that kind of change in the right place and you'll see reward for your effort. Do the same thing most other places and you gain nothing but more confusing code. This is what is meant by premature optimization. You can't know what to optimize until you have a working program where bottlenecks can be identified.
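
To make the "find the bottleneck first" point concrete (my own Python sketch, not part of the post above), profiling a working program is usually how those choke points get identified:

import cProfile
import pstats

def slow_lookup(items, queries):
    # Linear membership test inside a loop -- a plausible bottleneck on large inputs.
    return [q for q in queries if q in items]

def main():
    items = list(range(10000))
    queries = list(range(0, 20000, 2))
    slow_lookup(items, queries)

# Profile first, then look at where the time actually went.
cProfile.run("main()", "profile.out")
pstats.Stats("profile.out").sort_stats("cumulative").print_stats(5)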
Was This Post Helpful? 6

#10 Curtis Rutland

  • (╯□)╯︵ (~ .o.)~

Reputation: 4437
  • Posts: 7,718
  • Joined: 08-June 10

Re: Do people really use the "Big O" when programming?

Posted 30 September 2013 - 02:10 PM

One thing I've found is that professors don't always have a good understanding of real-world programming. So you hear a lot of things in class that may or may not actually be true once you hit the job market. That's an important one there: "always base [y]our programs on the ones with the smallest "Big O" value" is a great example of potential premature optimization. Real-world concerns include considerations like programmer time and requirements. If you're writing a business application that will be run once a day to read through a text file of 200 lines, then efficiency typically isn't a big concern. It doesn't matter if you're using quicksort or one of the O(n²) algorithms, because 40,000 iterations isn't all that big; or rather, O(n log n) isn't that much faster than O(n²) when n is small enough and modern computers can process millions of instructions a second.
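
To put rough numbers on that (a back-of-the-envelope sketch of my own, not from the post):

import math

n = 200
print(n ** 2)            # 40000 steps, roughly, for a quadratic sort
print(n * math.log2(n))  # ~1529 steps for an n log n sort -- both trivial for a modern CPU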

Of course, in these cases, you wouldn't be using the inefficient algorithm, you'd be using whatever your framework provided, because smart people have already written these algorithms and built them into your framework, and they know your time is better spent on the unique logic of your program, not the common logic of sorting algorithms.

On the other hand, having an understanding of Big-O is very important, because it enables you to choose both the simplest and most efficient way the first time. If I already know that one built-in algorithm is more efficient at a particular task than another, I'm more likely to use that particular one initially, as long as it doesn't add significant complexity to my program.

Then there are cases where optimization is important. Especially if you're writing low-level code, but it can matter in pretty much any application that deals with a big enough bottleneck. If that text file we mentioned earlier were hundreds of megabytes, you might need to work on optimizing your operations. That's when knowing your Big-O is really important.

I guess what it boils down to is: "it's important to know so that you can make good choices during your design phase, but to base every decision on efficiency is wasteful of your time and effort, because in 80% of the cases modern PCs are powerful enough to make the inefficiencies unimportant."
Was This Post Helpful? 4

#11 streek405

  • D.I.C Addict

Reputation: 10
  • Posts: 537
  • Joined: 10-March 13

Re: Do people really use the "Big O" when programming?

Posted 30 September 2013 - 03:37 PM

baavgai, on 30 September 2013 - 12:39 PM, said:


Wow thanks! That was a great explanation :D
Was This Post Helpful? 0

#12 jon.kiparsky

  • Pancakes!

Reputation: 7578
  • Posts: 12,743
  • Joined: 19-March 11

Re: Do people really use the "Big O" when programming?

Posted 02 October 2013 - 08:57 PM

Quote

Do people really use the "Big O" when programming?


Depends on the people, and what they're doing. Some don't and sometimes it doesn't bite them in the ass. Sometimes it does.

My most frequent use case for Big-O in daily work is simply in knowing the different data structures and their access times. It really helps if you know the characteristics of a given structure without thinking about them - and also, when it's going to matter and when convenience can trump efficiency.
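
A small example of what knowing those characteristics buys you (my own Python sketch, not from the post): a membership test on a Python list is O(n), while on a set it is O(1) on average.

import timeit

data = list(range(100000))
as_list = data
as_set = set(data)

# Same membership question, very different access times:
# the list does a linear scan, the set does a hash lookup.
print(timeit.timeit(lambda: 99999 in as_list, number=1000))
print(timeit.timeit(lambda: 99999 in as_set, number=1000))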

And generally, there is a real implicit benefit to studying big-O and algorithm design: you really start to have an instinct for what will work well, and you know when you need to think carefully about stuff like big-O, and you know what you need to look for. Most of the time, you won't need to think about these things, but when you do, it's good to have that "spidey-sense" that starts tingling as you get near a trouble spot.

So even if you never have to do the math directly, a solid grounding in the analysis of asymptotic running time will probably make you a better programmer in the long run.
Was This Post Helpful? 1

#13 streek405

  • D.I.C Addict

Reputation: 10
  • Posts: 537
  • Joined: 10-March 13

Re: Do people really use the "Big O" when programming?

Posted 02 October 2013 - 11:29 PM

jon.kiparsky, on 02 October 2013 - 08:57 PM, said:



OK. I guess I'm going to have to remember certain structures and kind of base it off that.
Was This Post Helpful? 0

#14 cfoley

  • Cabbage

Reputation: 1940
  • Posts: 4,029
  • Joined: 11-December 07

Re: Do people really use the "Big O" when programming?

Posted 03 October 2013 - 01:54 AM

In no particular order, the kind of things I consider (in a hand-wavey, imprecise way) when programming are:

- How fast it runs
- How often it will be run
- How long it takes me to code
- How easy it is to understand and debug
- How easy it is to extend
- How much memory it takes

I usually do three main types of programming. When I'm making desktop applications, readability and maintainability are usually the most important. When I'm making a library to crunch lots of data, big O also becomes important. When I'm writing throwaway scripts just to get an answer out of the computer, how long it takes me to code is all that matters.

Be careful of the trap in that last statement. I've become unstuck more than once when poorly-written throwaway code becomes permanent.
Was This Post Helpful? 3

#15 streek405

  • D.I.C Addict

Reputation: 10
  • Posts: 537
  • Joined: 10-March 13

Re: Do people really use the "Big O" when programming?

Posted 03 October 2013 - 11:42 AM

cfoley, on 03 October 2013 - 01:54 AM, said:

In no particular order, the kind of things I consider (in a hand-wavey, imprecise way) when programming are:

- How fast it runs
- How often it will be run
- How long it takes me to code
- How easy it is to understand and debug
- How easy it is to extend
- How much memory it takes

I usually do three main types of programming. When I'm making desktop applications, readability and maintainability are usually the most important. When I'm making a library to crunch lots of data, big O also becomes important. When I'm writing throwaway scripts just to get an answer out of the computer, how long it takes me to code is all that matters.

Be careful of the trap in that last statement. I've become unstuck more than once when poorly-written throwaway code becomes permanent.


That actually seems like a pretty useful set of things to base your program on :)
Was This Post Helpful? 0
