
#1 orestis125

  • New D.I.C Head

Reputation: 0
  • Posts: 2
  • Joined: 27-December 12

GPU Slower than CPU?

Posted 27 December 2012 - 01:30 PM

Hello everyone,

I'm new to this forum so please forgive me if this is the wrong place to ask this question.

About two weeks ago I started working on a 2D particle system. I finished a CPU-only version, and at around 3,000 particles my fps started to drop. At that point I researched online and found that it's generally best to use the GPU for particles. After spending a week learning HLSL, and with help from Microsoft's 3D particle sample, I finished a shader-based version of the system.

The problem is that it's slower than the CPU version in most cases. I say "in most cases" because it can render up to 200,000 particles without an fps drop, but only if the texture is a single pixel. If I use bigger textures (75x75, for example), the fps starts to drop at around 1,500 particles.
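
A rough back-of-the-envelope count of how many pixels the pixel shader has to fill per frame in each case (illustrative only; it ignores overdraw and alpha blending) shows how different the two workloads are:

// Back-of-the-envelope fill cost (sketched in C++ just for illustration;
// ignores overdraw, blending, and texture cache effects).
#include <cstdio>

int main() {
    long long tinyCase = 200000LL * 1 * 1;   // 200,000 particles at 1x1 texel
    long long bigCase  = 1500LL * 75 * 75;   // 1,500 particles at 75x75 texels
    std::printf("1x1:   %lld pixels per frame\n", tinyCase);  // 200,000
    std::printf("75x75: %lld pixels per frame\n", bigCase);   // 8,437,500
    return 0;
}

So the 75x75 case asks for roughly 42 times as much fill work per frame as the 1x1 case, even with far fewer particles.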

My (weak) laptop IS CPU-bound (an i5 at 2.7 GHz and a GeForce 610M graphics card), but still, I was expecting better performance from the GPU. I don't believe there is anything wrong with my code, since Microsoft's sample also runs at an even lower frame rate (though that one is in 3D).

Is this normal? If it is, how do I know which one to use, the CPU or the GPU?

Thank you very much,
I appreciate your help


Replies To: GPU Slower than CPU?

#2 lordofduct

  • I'm a cheeseburger

Reputation: 2538
  • Posts: 4,641
  • Joined: 24-September 10

Re: GPU Slower than CPU?

Posted 01 January 2013 - 11:17 AM

Let's just look at the graphics card you've got there... the 610M is the lowest-level 600-series mobile GPU from Nvidia.

GeForce 610M:
http://www.geforce.c.../specifications

let's compare it to, say, the GT 610 (the lowest of the GT 600 series):
http://www.geforce.c.../specifications

and the GTX 650 (the lowest of the GTX 600 series):
http://www.geforce.c.../specifications

The 610M has half the max clock speed of the GT 610, and the GTX of course blows it out of the water. So your GPU really isn't the hottest thing on the block... it's about the lowest you can go within that series.


Now to compare that to your CPU... this is where it gets complicated. Your CPU and GPU work on different architectures; they aren't apples to apples. The machine language isn't even the same, since the GPU uses completely different op-codes, and each is better at processing different things. For instance, the GPU far surpasses the CPU at massively parallel floating-point operations, but the CPU does better at I/O operations.
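
To give a feel for what the GPU is good at, here's a minimal sketch (in C++ for illustration, not code from either of your programs) of the kind of loop that maps well onto a GPU: the same floating-point math applied independently to every particle.

// Embarrassingly parallel per-particle update: no iteration depends on any
// other, which is exactly the shape of work a GPU is built to chew through.
struct Particle { float x, y, vx, vy; };

void update(Particle* p, int count, float dt, float gravity) {
    for (int i = 0; i < count; ++i) {
        p[i].vy += gravity * dt;   // same math for every element...
        p[i].x  += p[i].vx * dt;   // ...with no cross-element dependency,
        p[i].y  += p[i].vy * dt;   // so it can run as one thread per particle
    }
}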

Furthermore, at a higher level, you write your programs for each in different high-level languages, and you don't have control over what machine code is generated when they're compiled.

What I'm saying is that the GPU isn't necessarily always going to be faster than the CPU. We just say to use the GPU for graphics-based stuff... because that's what the GPU is designed for. MOST GPUs will fare better at it, though of course there will always be exceptions to that rule.

So now to your problem.

You have two different programs getting two different speed results on two different architectures. There's hardly any comparison you can really draw from that.

Furthermore, I don't know how optimized either program is. There's a good chance your GPU program is less efficient because of novice code. And the MS sample means nothing; their samples usually aren't optimized either... they serve solely as examples.
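
To give one common example of what "novice code" can mean here: issuing a separate draw call per particle instead of filling one buffer and submitting it as a single batch. A rough sketch of the difference (the draw functions below are stand-ins, not any real API):

#include <vector>

struct Particle { float x, y; };

// Stand-ins for whatever your graphics API provides (e.g. a sprite batch or
// a vertex-buffer draw); defined empty here just so the sketch compiles.
void drawOne(const Particle&) {}
void drawAll(const std::vector<Particle>&) {}

void renderSlow(const std::vector<Particle>& ps) {
    for (const Particle& p : ps)
        drawOne(p);               // N draw calls: per-call CPU/driver overhead
}

void renderFast(const std::vector<Particle>& ps) {
    drawAll(ps);                  // 1 draw call: submit all particles at once
}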

I'd suggest testing them on other hardware. Consider what hardware you plan to target, not what you plan to develop on. If you're targeting users who are mostly on laptops like yours, which will probably have low-end graphics (possibly even lower than yours, like on-board video)... then these results tell you that you should be writing for the CPU, because your code runs faster there.

But if you plan to target desktops, where your average user has a higher-end graphics card better than your dev machine's... well, then you need to test there, and not on your dev machine.
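
And whatever machine you test on, measure both versions the same way. Something like this minimal timing harness (a sketch in C++; the per-frame call is a placeholder for your own update/draw) gives you numbers you can actually compare across machines:

#include <chrono>
#include <cstdio>

int main() {
    using clk = std::chrono::steady_clock;
    const int frames = 1000;

    auto start = clk::now();
    for (int i = 0; i < frames; ++i) {
        // updateAndDrawParticles();  // placeholder: your per-frame work here
    }
    double totalMs =
        std::chrono::duration<double, std::milli>(clk::now() - start).count();

    std::printf("avg frame: %.3f ms (%.1f fps)\n",
                totalMs / frames, frames * 1000.0 / totalMs);
    return 0;
}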
