12 Replies - 7621 Views - Last Post: 26 January 2012 - 09:01 PM

#1 Nekroze

  • D.I.C Head

Reputation: 14
  • Posts: 170
  • Joined: 08-May 11

Fast Python Libraries Compatibility?

Posted 22 January 2012 - 11:27 PM

OK, so I was talking with a friend about 3D games, how I do a lot of my coding in Python nowadays, and how much I love the language. Somehow we got onto real-time ray tracing, and he bet me that I could not make a decent real-time ray tracer in Python. I thought about it for a moment and took him up on it.

Now, to be honest, I probably cannot make a ray tracer. I am nothing special, no genius or anything, but it seemed like a cool and fun learning experience, so I am going to try, with a few constraints to keep to the bet.

I can use whatever Python modules I like to accomplish this (anything that can increase speed), but I have to write little to no C code.

I have been doing some research and looking at Python modules that could help. However, I have no idea whether these modules can work together at all, and unfortunately my desktop's motherboard died three days ago; until I can get the money together to fix it, I cannot install and test any of this. All I have is my tablet. So I figure it never hurts to ask whether anyone has experience with these modules, and it gives me something to do other than go out of my mind without a computer.

On to the modules I have found that may be useful in bolstering the performance of this project, in order to reach the real-time ray-tracing goal.

Stackless Python 2.6: The green tasklets could well come in handy, and if not, I can simply not use them.

DirectPython 11: I need something for drawing the scene, and some DirectX 11 features may come in handy later.

NumPy: Some of its mathematical functions will obviously be useful, though many of its capabilities seem to be mirrored or enhanced by another module below.

Psyco: For obvious reasons; it is also why I chose Python 2.6. However, I am not sure of its compatibility with the other modules.

CLyther: This is one of the main performance-enhancing modules I am looking at. It is basically an OpenCL library that removes most of the need to embed C in the Python to write the GPU kernels, which keeps with the little-to-no-C goal. It also seems to replace some NumPy functions with GPU-accelerated ones, though I am not sure it covers all of NumPy.

Cython: I would really rather not use this at all, but if I cannot accomplish the goal without it, then I will use it if I must.
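For anyone unfamiliar with what the Stackless tasklets above buy you, their cooperative scheduling can be approximated in plain CPython with generators. This is only an illustrative sketch, not real Stackless code (`scheduler` and `worker` are made-up names; Stackless itself uses `stackless.tasklet` and `stackless.run`):

```python
from collections import deque

def scheduler(tasklets):
    """Round-robin over generator-based 'tasklets', like a toy stackless.run()."""
    queue = deque(tasklets)
    while queue:
        task = queue.popleft()
        try:
            next(task)          # run until the tasklet yields (cooperative switch)
            queue.append(task)  # still alive: put it back in the rotation
        except StopIteration:
            pass                # tasklet finished

def worker(name, steps, log):
    for i in range(steps):
        log.append((name, i))
        yield  # hand control back to the scheduler

log = []
scheduler([worker("a", 2, log), worker("b", 2, log)])
print(log)  # interleaved: [('a', 0), ('b', 0), ('a', 1), ('b', 1)]
```

Real tasklets switch on channels and blocking operations rather than explicit yields, but the round-robin shape is the same.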

Now, my problem is that I am not sure whether I can get all of these to work together. For example, if I write a function to be run on the GPU by CLyther, will it fail when I enable Psyco on the project? Or, if I have to use Cython to write one of the important functions in a more C-like way, will I still be able to use CLyther to run that function on the GPU?

The raw-speed requirements will be mitigated by my computer (once it is running again), as I have a very high-performance gaming machine (and fixing it may mean upgrading it even further), so it should handle the calculations better than most computers; as long as it runs on my machine, the goal is achieved.

As you can probably tell, I am not yet asking about ways to optimize the ray-tracing algorithms themselves; I am just looking at the environment the code will run in, which is all I can really research with my computer down.

Thanks guys; any experience with any of these modules, especially using them together, would be incredibly appreciated.
Nekroze

This post has been edited by Nekroze: 22 January 2012 - 11:35 PM



Replies To: Fast Python Libraries Compatibility?

#2 Motoma

  • D.I.C Addict

Reputation: 452
  • Posts: 796
  • Joined: 08-June 10

Re: Fast Python Libraries Compatibility?

Posted 23 January 2012 - 07:14 AM

To quote Donald Knuth, "Premature optimization is the root of all evil."

My strong suggestion is to start with as few modules as you can and work up from there. CLyther is a great start: as a Python wrapper around C-compiled OpenGL, it should give you decent performance.

I doubt you'll have problems getting good numbers out of your ray tracer. Take a look at the speed of this ray tracer written in JavaScript!

#3 Nekroze

Re: Fast Python Libraries Compatibility?

Posted 23 January 2012 - 02:10 PM

I have heard that quote, and I do not plan on importing everything as soon as I start; I am just looking at the options I have, in case I need them.

Does CLyther include OpenGL as well as OpenCL? I thought it was just CL. Maybe I won't have to use DirectPython 11 then; I have never really been sure about DirectPython. As much as I would like to use DirectX 11, I am not sure how much more complex doing something simple in it is compared to OpenGL. Although I shouldn't need much more than the ability to draw a pixel, I would like this ray tracer to eventually import at least .x models, with some extra information for materials and so on, and ray-trace that; that would make for a meaningful ray tracer rather than just a lab experiment.

I will be starting with as few modules as possible: just DirectPython and CLyther, plus NumPy if I need something CLyther lacks. I have always wanted to make something that can harness my sizable bank of CUDA cores, and not much is cooler than the implementation you make yourself.

As always, thanks Motoma.

#4 darek9576

  • D.I.C Lover

Reputation: 198
  • Posts: 1,682
  • Joined: 13-March 10

Re: Fast Python Libraries Compatibility?

Posted 23 January 2012 - 02:55 PM

Try PyCUDA; it's used for general-purpose GPU programming, so it fits what you have to accomplish. There are many code samples of ray tracing in CUDA C, but you will just need to port them from C to Python.

#5 Nekroze

Re: Fast Python Libraries Compatibility?

Posted 23 January 2012 - 03:01 PM

From what I understand of PyCUDA, from the few examples I have seen, you still have to write the kernels in C inside the Python, and my goal is preferably to write no C at all. Actually, from the examples I have seen, plain OpenCL in Python is exactly the same: you write the kernels in C inside your Python script. CLyther, however, wraps that up and allows a Python-written function to become a kernel.

I would like to find some performance comparisons between OpenCL and CUDA, preferably for their Python implementations, but either way... I will go looking, I think.

I did look at PyCUDA, but there seems to be more going on in the Python OpenCL scene, which is also why I have, thus far, gone that way. However, I am willing to be swayed into converting.
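For reference, the kind of function one would hope a tool like CLyther could turn into a kernel is something like a ray-sphere intersection test. Here is a minimal plain-Python/NumPy sketch (not CLyther code, just an illustration of the math):

```python
import numpy as np

def hit_sphere(origin, direction, center, radius):
    """Return the nearest positive ray parameter t, or None on a miss.

    Solves |origin + t*direction - center|^2 = radius^2, a quadratic in t.
    """
    oc = origin - center
    a = np.dot(direction, direction)
    b = 2.0 * np.dot(oc, direction)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None                       # ray misses the sphere entirely
    t = (-b - np.sqrt(disc)) / (2.0 * a)  # nearer of the two roots
    return t if t > 0.0 else None

o = np.array([0.0, 0.0, 0.0])
d = np.array([0.0, 0.0, 1.0])
print(hit_sphere(o, d, np.array([0.0, 0.0, 5.0]), 1.0))  # 4.0
print(hit_sphere(o, d, np.array([5.0, 0.0, 5.0]), 1.0))  # None
```

Branch-free variants of this (computing all rays against all spheres as arrays) are what map well onto a GPU.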

#6 Motoma

Re: Fast Python Libraries Compatibility?

Posted 23 January 2012 - 03:02 PM

Yes, you are right, Clyther is OpenCL, not OpenGL. I vow not to answer any more questions before I've had my coffee.

#7 Nekroze

Re: Fast Python Libraries Compatibility?

Posted 23 January 2012 - 04:00 PM

In the battle between OpenCL and CUDA: I only have NVIDIA hardware, so using CUDA might be better, but I am not sure which library to use. I cannot just use PyCUDA, because its kernels would mean writing C code, though a lot of the built-in CUDA math libraries could be useful.

I have had a look at Copperhead, and it does seem rather perfect as a CUDA counterpart to what CLyther can do: use a simple decorator to turn a function into a GPU-run function.

While it has not been touched since 2010, it does still seem decent, although I have no clue whether the project was stopped in alpha/beta or what, because there do not seem to be any downloads for it.

There is a homepage for it, but there are no downloads for the actual library in the downloads section, and only links to its dependencies in the wiki's installation documents.

Does anyone know what happened to it? Or whether there is another library that can use CUDA through a decorator on a Python function?
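To illustrate the decorator style that Copperhead and CLyther aim for, here is the general shape of such an API in plain Python. This is purely a stand-in: `gpu_kernel` and `KERNELS` are invented names, and no actual GPU compilation happens here:

```python
import functools

KERNELS = {}  # registry of functions we would hand to a GPU compiler

def gpu_kernel(func):
    """Stand-in for a CLyther/Copperhead-style decorator: register the
    function for 'compilation' and fall back to plain Python when called."""
    KERNELS[func.__name__] = func
    @functools.wraps(func)
    def launcher(*args):
        # A real library would translate func into an OpenCL/CUDA kernel
        # at this point; this sketch just executes it on the CPU.
        return func(*args)
    return launcher

@gpu_kernel
def scale(data, factor):
    return [x * factor for x in data]

print(scale([1, 2, 3], 2))  # [2, 4, 6]
print("scale" in KERNELS)   # True
```

The appeal of this style is exactly what the thread is after: the kernel body stays ordinary Python, and the decorator is the only seam where the GPU machinery plugs in.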

#8 Nekroze

Re: Fast Python Libraries Compatibility?

Posted 23 January 2012 - 08:36 PM

Oh wait, I'm being silly: there is nothing under downloads because it's under source; I just needed to do an hg clone...

Anyway, I guess I will try them both for performance/stability, if I can ever get my computer working again.

It will be interesting to see which works better/faster. I was also wrong about the project being dead: both seem to be actively developed, which is good, yet at the same time bad for me, since I just want to use something that works.

Enough ranting. If no one else can comment on the cross-compatibility problems that may (or will) arise if I use Psyco with Copperhead/CLyther, then when I am able to test it myself I will write something up.

#9 Simown

  • Blue Sprat

Reputation: 319
  • Posts: 650
  • Joined: 20-May 10

Re: Fast Python Libraries Compatibility?

Posted 24 January 2012 - 07:59 PM

I have been looking into pyglet recently, during my minimal free time; it provides an interface to OpenGL and GLU. It could come in handy, if you haven't made a committed choice yet :)

#10 Nekroze

Re: Fast Python Libraries Compatibility?

Posted 26 January 2012 - 02:06 AM

I have been prototyping a mockup framework for this project in Stackless Python 2.6, and after doing some testing I have found that on two computers (not the fastest computers, but still), using Psyco actually increases execution time. Only by a small amount, but it seems consistent...

Can anyone else replicate this, by writing some Stackless-specific code, no matter how simple, and running it with Psyco? It would be very handy to know whether this is always true; I am not sure why it would happen on Stackless code, though.

The difference may be within the noise, but it is still higher every time I use Psyco. Maybe just bad luck... constantly?

This post has been edited by Nekroze: 26 January 2012 - 02:34 AM


#11 Motoma

Re: Fast Python Libraries Compatibility?

Posted 26 January 2012 - 07:19 AM

Your tests are probably misleading you. For small or short-running programs, Psyco may degrade your overall run time. This is because Psyco has to load, interpret your program, and then build three different byte-compiled versions with various introspection. If that overhead is greater than the resulting performance gain (and the gain is very low for short/small programs), you will see your execution time increase.
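One way to separate that one-time overhead from steady-state speed, with Psyco or any JIT-style tool, is to run a few untimed warmup calls and then take the best of several timed repeats. A generic sketch using the standard timeit module (`work()` is just a placeholder workload):

```python
import timeit

def work():
    # placeholder for the real calculation being benchmarked
    return sum(i * i for i in range(10000))

# Warm up first: these calls are not timed, so one-time costs
# (imports, specialization, caches) fall outside the measurement.
for _ in range(3):
    work()

# Best-of-N is more robust against background noise than a single run.
best = min(timeit.repeat(work, repeat=5, number=100))
print("best of 5 runs of 100 calls: %.4f s" % best)
```

If Psyco still loses after a warmup like this, the slowdown is in the compiled code itself rather than in the compilation step.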

#12 Nekroze

Re: Fast Python Libraries Compatibility?

Posted 26 January 2012 - 03:14 PM

Would that still count if I am not timing the execution of all of my code? Before I call my main calculation function, I do this:

import time
start = time.time()

and then, immediately after that function returns, I do this:

print "TIME: " + str(time.time() - start)

This is my temporary in-code measurement of how long the main calculation function takes.

Without Psyco, for example, it reports the time as 1.1 to 1.2, and with Psyco it goes to 1.2 to 1.4.

Like I said, at numbers this low it may be noise, but it is consistently higher with Psyco.

Also, to make sure the previous timing method wasn't misleading due to lack of accuracy, I also ran the main calculation function through a timeit object, with a setup that imports Psyco once so that the import does not affect the results, and it still shows the same difference.

#13 Nekroze

Re: Fast Python Libraries Compatibility?

Posted 26 January 2012 - 09:01 PM

Alright then, time to kill off some of the ideas I thought would improve the performance of this project.

I am attaching my current code: Attached File src.zip (2.26K, 23 downloads). It is light, but it has a few testing methods for the things I am about to describe, so it is relevant.

First off, Psyco and Stackless: Stackless is very helpful, but it seems that you cannot usefully run Psyco on a Stackless Python installation.

Running the main.py file as-is will demonstrate that, doing the same task with and without Psyco, the Psyco run is slightly slower (note this does require a Stackless Python installation, preferably 2.6.5), but I am not sure why.

The next thing I am starting to find out about is NumPy. When I was making a 2D map engine, I was told that a NumPy array would be faster than a 2D list comprehension if I made each tile a point in a NumPy array, because I could then access the tiles faster.

In renderer.py there is the renderer class, and in its __init__ I have two implementations laid out: a 2D list comprehension with an entry for each pixel of the mock test screen, and a 2D NumPy array of the same size. Using the NumPy array in the functions where I have to go through and change every item manually causes a sizeable slowdown in the renderer's render_frame() function.

I have commented out the NumPy implementation, so you can comment out the current list and uncomment the NumPy array to see the results for yourself.

Don't get me wrong: I am not perfect, and it could be something in my use of these modules that caused my bad numbers. To confirm it, for me and anyone else out there (or just so you can point out that I was doing something silly), download the attachment and try it yourself, so at least I can learn from any mistakes I have made.

Thanks guys; I will be continuing with this and trying to get more performance enhancers working as time goes on.
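The NumPy slowdown described above is easy to reproduce without the attachment. The usual culprit is touching a NumPy array element by element from Python, which is slower than doing the same with plain lists; NumPy only pays off when the whole screen is updated in one vectorized operation. A minimal sketch (the screen size and per-pixel fill rule are made up for illustration):

```python
import numpy as np
import timeit

h, w = 200, 200

def per_pixel_loop():
    screen = np.zeros((h, w))
    for y in range(h):           # Python-level loops: one NumPy scalar
        for x in range(w):       # operation per pixel, 40,000 times
            screen[y, x] = x + y
    return screen

def vectorized():
    ys, xs = np.mgrid[0:h, 0:w]  # whole-array row/column index grids
    return xs + ys               # a single C-level array operation

assert np.array_equal(per_pixel_loop(), vectorized())

loop_t = timeit.timeit(per_pixel_loop, number=3)
vec_t = timeit.timeit(vectorized, number=3)
print("loop: %.4fs  vectorized: %.4fs" % (loop_t, vec_t))
```

If render_frame() is written as per-element loops, switching the storage to NumPy makes things worse, exactly as observed; the win only appears once the per-pixel work is expressed as whole-array operations.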
