
GPU vs CPU for rendering

Posted: April 23, 2009
CG forums guest
I have a few questions about GPUs vs. CPUs.


I am a user of Blender and I am fairly new to 3D.
My understanding is that GPUs are more powerful than CPUs in many ways, especially when it comes to graphics.

So why is it that the CPU of the computer does the actual rendering and the GPU is only used for the model creation process? Doesn't that waste the power of the GPU?

Now with Blender being open source it should be possible to shift the pipeline to the GPU, correct? Or is that just too much work, or not possible?

Because from my point of view I should be able to take the new Nvidia Quadro series video cards with their CUDA interface and, with some changes to Blender, use the Quadro to render the image. Would that not work?

Say, for instance, I had a system with two Nvidia Quadro 2200 S4 series video cards connected with SLI. That would give me 1920 processor cores and around 12 gigs of video RAM. Would that not be enough power to render the final image, instead of using a set of RenderBoxx 10300 units clustered together?

It really seems like a waste of money for me to buy a 20-30 thousand dollar 3D graphics machine with that much video power and then have to buy a rendering farm for another 20-100 thousand dollars.

So could someone please explain to me why the CPU must do that work? It seems to me that the CPU's microcode controllers and code processing pipelines are not really built for that kind of work.

So why not use the GPU for all of the work, or use the CPU to handle the overflow?
Call me dumb, but that just makes more sense to me.

Thanks
Posted: April 23, 2009
jrtroberts
I thought I had logged in when I wrote this post, but obviously not. Thanks if you can help explain this to me.
Posted: April 24, 2009
Andyba
I think that GPUs are too specialised and have a limited number of shaders, which limits the number of ways they can calculate the final image. That's why the processor does all the rendering. But I may be wrong.
I know there are renderers that use the GPU to render fast, but I don't know what the quality of those renders is.
Posted: April 26, 2009
jrtroberts
Andyba wrote:
I think that GPUs are too specialised and have a limited number of shaders, which limits the number of ways they can calculate the final image. That's why the processor does all the rendering. But I may be wrong.
I know there are renderers that use the GPU to render fast, but I don't know what the quality of those renders is.


After many phone calls to MIT and Southern Polytechnic University, I discovered that yes, video cards are designed to handle specific arrays of numbers, such as the vertex coordinates and other geometry of a model, but since raytracing and other final-render techniques are too dynamic, the main CPU is used for the rendering.

Thanks.
Posted: June 03, 2009
Andyba
Maybe in the future GPUs will be used for rendering...
But right now people are using huge render farms.
Silly people... Why would they do it?
Posted: June 05, 2009
CG forums guest
As Andyba wrote, it is possible to render with the GPU using special render engines.

Here is an example of GPU rendering in Autodesk Maya with the FurryBall renderer.


Pretty impressive, eh?
Posted: December 15, 2009
Tinlau
Ooh, this renderer might be very useful to me. I found another example where it calculates ambient occlusion, which is really amazing.

Posted: January 17, 2010
AlteredTach
Here's the deal. A GPU is very powerful for processing graphics information, but when you are rendering or modeling something, you are actually running a program that then creates the models and so on. So a GPU rendering models would be the same thing as your GPU running Firefox, or Windows, or Internet Explorer. It comes down to the way the program is written as to how rendering will be carried out. Multi-processor programming is very difficult, so I wouldn't count on GPUs being used for rendering to any great degree anytime soon.

Just as you said, GPUs deal with streams of very specific information.
Posted: January 22, 2010
Andyba
About rendering with GPU:
Here is a very interesting video showing some new renderer technology that uses the GPU and compares it to some Vray GI renders. The anti-aliasing is not that good, but the global illumination is pretty amazing.

Posted: January 29, 2010
UVlas
Wow, these are pretty damn fast GI renders.
It is strange that the guy in the video says the quality is identical to Vray, while in the video itself I can see that it is not.
Posted: January 29, 2010
andrewbell
I cannot see the videos ... can you post links?
Posted: April 07, 2010
PixelOz
Ok, I'll try to explain what the deal is with that. GPUs have gone through a long evolution since the early days when they started to appear on PCs. Originally, as some of you remember, graphics cards didn't process 3D graphics at all; what they did was compute raster images, basically pixels.

Actually, at the beginning they were not even called graphics cards, because all they drew on the display were characters. Later on they started to draw pixels too, but these early pixel pushers were mainly used just to drive the display as 2D graphics cards; they had no capability at all to accelerate 3D graphics such as drawing polygons, textures and the like.

Because of this, early 3D software had to do its computations entirely on the CPU. Early games and 3D programs like Flight Simulator did all the 3D math on the CPU, and the graphics card essentially just drew the final pixels on the display. The very early 3D programs were all CPU driven, and that included not only the 3D models you see on the display as you create and edit them; the CPU was also the one responsible for the final renderings.

When graphics chips started to evolve, they acquired 3D acceleration capabilities in addition to 2D pixel drawing, but they specialized in real-time 3D graphics, the kind of graphics that 3D games use. As you may know, real-time 3D graphics and photorealistic rendering are two different things. Real-time graphics sacrifice quality for the enormous speed needed to redraw the 3D image many times per second, while pre-rendered graphics, such as those created by raytracing and other modern rendering methods, focus on quality in order to produce images that are far more realistic than real-time graphics, and that requires much more time to compute. As a reference, a single frame of a very complex scene at very high resolution, like those used in Hollywood movies, can take up to several days to compute, and a movie needs 24 frames per second of animation, so it can take considerably more time; for simpler 3D scenes it can take anywhere from a few seconds to several hours to render a frame.

To a large degree, real-time graphics were propelled forward by the enormous boom in the game industry. The desire for higher and higher quality graphics in 3D games, a multi-billion dollar industry, pushed the evolution and quality of real-time graphics tremendously, and this type of graphics was also used to accelerate other kinds of 3D programs, such as 3D CAD and 3D modeling applications. The real-time graphics that early GPUs specialized in were only good for driving the 3D models you were creating or editing; those GPUs almost totally neglected pre-rendered graphics such as raytracing, and because of that, rendering remained on the CPU for a long time. So earlier GPUs specialized mostly in real-time graphics.

As time went by, these GPUs continued to evolve, real-time graphics became better and much more complex, and the cards started to acquire other capabilities such as accelerating video playback. As the complexity and programmability of GPUs grew, their manufacturers realized that with some modifications and additional capabilities these processors could do much more than real-time graphics and video, and that's where the concept of GPU computing came onto the scene.

GPU computing acceleration like CUDA, OpenCL (Open Computing Language), etc. is only starting to appear on the scene, but basically it means that GPU manufacturers have begun adding capabilities to their processors that let them perform other types of computations that were normally reserved for the CPU, and that includes things like raytracing and other rendering methods that were previously performed almost exclusively on CPUs.
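To give a rough idea of what that looks like in practice, here is a minimal CUDA sketch (the kernel name, array sizes and values are made up purely for illustration): an ordinary numeric loop is rewritten so that one lightweight GPU thread handles each element, and nothing in it is specific to graphics.

```
// Minimal sketch of CUDA-style GPU computing: the loop y = a*x + y over a
// million floats becomes a kernel with one thread per element.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's element
    if (i < n)
        y[i] = a * x[i] + y[i];                     // independent of every other element
}

int main() {
    const int n = 1 << 20;
    float *hx = new float[n], *hy = new float[n];
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    float *dx, *dy;
    cudaMalloc(&dx, n * sizeof(float));
    cudaMalloc(&dy, n * sizeof(float));
    cudaMemcpy(dx, hx, n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, n * sizeof(float), cudaMemcpyHostToDevice);

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    saxpy<<<blocks, threads>>>(n, 2.0f, dx, dy);    // thousands of threads launched at once
    cudaMemcpy(hy, dy, n * sizeof(float), cudaMemcpyDeviceToHost);

    printf("y[0] = %f (expected 4.0)\n", hy[0]);
    cudaFree(dx); cudaFree(dy);
    delete[] hx; delete[] hy;
    return 0;
}
```

Compiled with nvcc, each element is processed by its own thread; GPU renderers build on exactly this kind of per-element parallelism.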

As to your question of why graphics cards haven't done this before: I've thought for a long time that graphics cards should help with things like raytracing, but the factor that kept them from acquiring those capabilities earlier was probably an economic one, because back then it was more economically feasible to add real-time capabilities and improve on those. Nevertheless, GPU computing is starting to appear on the scene now, and some programs are already taking advantage of it, like Hypershot. Hypershot from Bunkspeed, as many people already know, is a newer and very, very fast CPU-based renderer, but it has now added CUDA acceleration too, so it can render using all the available CPU and GPU power combined.

If you get a graphics card (or, more likely, a multi-card setup in SLI mode) with CUDA capability, as many Nvidia cards have, a program like Hypershot will render much, much faster than it already did with CPU power alone. The work of using GPUs for rendering with Blender has already started, albeit externally.

The people who created the Luxrender unbiased renderer (an unbiased renderer is basically one that progressively improves image quality the more time you give it to compute), which can be used with Blender, are already experimenting with GPU acceleration, and the results are very promising because it speeds up the renderings considerably. This is still in the early stages but is coming along well. Blender's own internal renderer is not yet capable of using GPU acceleration, but that doesn't mean it won't acquire such a capability in the future, and it is likely that many software companies will add GPU acceleration to their renderers in the near future. It is also likely that many open source 3D programs will acquire the capability, but that will take some time because, like I said, this is still in its infancy; at least it is already starting to happen.

The Refractive Software people have just released v1.0 beta 2 of their commercial unbiased renderer Octane (about 99 euros), which has support for Blender. This renderer runs on the GPU and supports the new CUDA 3.0. So you see, this one is already at beta 2, so the final version will be done soon.

The issue is not that it can't be done; it's merely that this is relatively new, and it will take some time before all the available 3D software is adapted to render with GPUs, or with CPUs and GPUs combined as Hypershot does. Reprogramming all those renderers to take advantage of GPU computing is simply going to take a while.
Posted: April 07, 2010
PixelOz
To Uvlas:

What happens is that he is changing the scene constantly, and the scene requires several seconds to reach a quality similar to the original Vray scene. Every time he changes the camera angle, the renderer starts refining the image again, but if he leaves the angle static, in just a few seconds the render reaches a quality level similar to the Vray scene that took 2 minutes to render.

You can see that he changes the camera angle constantly, and every time he does, the render becomes grainy and starts to refine again. If he left the scene alone, without changing the angle, for say 20 seconds, you would get a quality level similar to the Vray scene, and that's roughly 20 to 30 seconds compared to 2 minutes, so the acceleration gain is considerable.

Also notice that he says it is in the early stages and that the end result, after refining the software, will be even better. Once the code is optimized, the result could be even faster rendering with still better quality.

Notice also that his renderer is some sort of unbiased renderer, whose quality improves the more time the computation is given. That is different from Vray, which is a biased renderer with a definite (finite) time frame to finish the computation, in this case 2 minutes, so we are comparing two different rendering methods here.

The point is that before GPU acceleration, an unbiased render took a whole lot longer to reach a quality similar to that of a biased render. A good-quality unbiased render normally takes hours or even days to compute, but with GPU acceleration, as he shows, that is cut down to seconds, which is a huge performance gain; in this case big enough to get, in about 20 to 30 seconds, the quality that took the Vray render about two minutes to achieve.
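If it helps, here is a rough CUDA sketch of that progressive idea (everything in it, including the renderSample stand-in, is invented for illustration and not taken from Luxrender or the renderer in the video): each pass adds one more noisy sample per pixel into an accumulation buffer and displays the running average, and moving the camera simply clears the buffer, which is why the image turns grainy again after every change.

```
// Hypothetical sketch of progressive (unbiased-style) refinement on the GPU.
// renderSample() stands in for tracing one randomly jittered light path.
#include <cstdio>
#include <cuda_runtime.h>
#include <curand_kernel.h>

__device__ float renderSample(int x, int y, int pass) {
    // Placeholder for a real path-tracing step through pixel (x, y).
    curandState rng;
    curand_init(1234ULL, (unsigned long long)y * 4096 + x, pass, &rng);
    return curand_uniform(&rng);                   // fake noisy radiance, one channel
}

__global__ void accumulatePass(float* accum, float* display,
                               int width, int height, int passCount) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;
    int i = y * width + x;

    accum[i] += renderSample(x, y, passCount);     // add this pass's sample
    display[i] = accum[i] / (float)passCount;      // show the running average
}

int main() {
    const int width = 640, height = 480, passes = 32;
    float *accum, *display;
    cudaMalloc(&accum, width * height * sizeof(float));
    cudaMalloc(&display, width * height * sizeof(float));
    cudaMemset(accum, 0, width * height * sizeof(float)); // a camera move would redo this reset

    dim3 block(16, 16);
    dim3 grid((width + 15) / 16, (height + 15) / 16);
    for (int pass = 1; pass <= passes; ++pass)             // more passes = less grain
        accumulatePass<<<grid, block>>>(accum, display, width, height, pass);
    cudaDeviceSynchronize();

    float center;
    cudaMemcpy(&center, display + (height / 2) * width + width / 2,
               sizeof(float), cudaMemcpyDeviceToHost);
    printf("center pixel after %d passes: %f\n", passes, center);
    cudaFree(accum); cudaFree(display);
    return 0;
}
```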

Now, if the Vray people adapt their renderer to GPU computing, as they probably will in the near future, it may compute that 2-minute scene in a fraction of the time. It just happens that this example compares two very dissimilar rendering methods, but it still shows how big the performance gains from GPU acceleration can be.
Posted: July 21, 2010
bset
PixelOz made a pretty solid summary of the history etc. For those who don't understand, or feel the need to argue for some reason *shrug*: rendering via the GPU is indeed vastly superior in performance, for a number of reasons. The simplest answer, though, is that the GPU is a highly parallel floating-point processor, and rendering is a series of exactly those kinds of calculations with a good deal of thread-level parallelism (fancy computron speak for algorithms where each output calculation has completely independent input data).
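To make that concrete, here is a small sketch in CUDA (used here only because it is compact; the hard-coded sphere and shading are invented for illustration), where every pixel is shaded by its own GPU thread and never reads or writes another pixel's data, which is exactly the thread-level parallelism described above.

```
// Each thread shades one pixel of an image of a single unit sphere.
// No thread depends on any other thread's input or output.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void shadePixels(unsigned char* image, int width, int height) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    // Camera ray through this pixel (orthographic, looking along +z, for simplicity).
    float ox = (x - width * 0.5f) / (width * 0.5f);
    float oy = (y - height * 0.5f) / (height * 0.5f);

    // Intersect with a unit sphere at the origin.
    float d2 = ox * ox + oy * oy;
    unsigned char shade = 0;
    if (d2 < 1.0f) {
        float nz = sqrtf(1.0f - d2);            // z-component of the surface normal
        shade = (unsigned char)(255.0f * nz);   // brighter where the surface faces the viewer
    }
    image[y * width + x] = shade;               // this pixel only; no coordination needed
}

int main() {
    const int width = 512, height = 512;
    unsigned char* image;
    cudaMalloc(&image, width * height);

    dim3 block(16, 16);
    dim3 grid((width + 15) / 16, (height + 15) / 16);
    shadePixels<<<grid, block>>>(image, width, height);  // ~262k independent threads
    cudaDeviceSynchronize();

    unsigned char center;
    cudaMemcpy(&center, image + (height / 2) * width + width / 2, 1, cudaMemcpyDeviceToHost);
    printf("center pixel value: %d\n", (int)center);
    cudaFree(image);
    return 0;
}
```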

The cost of using this amazing hardware is the software interface. For the most part, you really only have two options, DirectX and OpenGL. There are some additional libraries like CUDA, but they are hardware-specific and thus have less value. Furthermore, DirectX is Microsoft-specific, making it less attractive as well. So you're almost surely stuck with OpenGL. This wouldn't be a problem if libraries like DirectX and OpenGL were designed for general floating-point operations, but they are not. To get custom linear operators (basically the steps in a render algorithm) to execute, you have to hack them in. Thus it requires a very different way of thinking and programming than most people are used to.

Fortunately, the payoff for living in this nasty hacked-together world of render operations is well worth it, but many programmers don't know how to think of libraries like OpenGL as a general computation platform, and others are simply (dare I say it?) too lazy to bother.

Now, as things always tend to go, the software guys probably won't be the ones who need to change. Starting somewhere around the '90s, hardware capabilities began skyrocketing, demanding less and less code efficiency, until we ended up in a situation where software wastes huge amounts of hardware potential (such as in CPU rendering). However, every time some sort of bottleneck is reached, it is the hardware that adapts (at least that has been the case for the past two decades).

If you notice, CPUs have recently begun to grow more parallel; denser and denser multi-core processors are being released. You may think that your new quad- or 8-core machine is pretty neat, but a comparably priced graphics card has hundreds of parallel units. In fact, it is likely the roles of graphics card and CPU will begin to blur in the near future, as CPUs get to tens and hundreds of cores and begin to include better floating-point capabilities. At some point, we'll need only a CPU or a GPU, and the difference between them will be non-obvious, or the role of one will change significantly.

For the people who are doing GPU rendering now, that's awesome. I always appreciate good software design. Those who aren't may just need to wait it out.
Posted: October 19, 2011
Scrible
jrtroberts, it seems like you haven't heard of Blender's awesome new renderer "Cycles"?!
Since I'm not a registered member, I can't post any links. Just Google "blender cycles", get a build from graphicall"dot"org and enjoy!
Posted: October 23, 2011
PixelOz
Maybe he has and maybe he hasn't; remember that his post is already several years old, and Cycles didn't exist back then.
Posted: March 30, 2012
songjuyu
I cannot open the video, what a pity.
Posted: February 08, 2013
Andyba
PixelOz wrote:
Maybe he has and maybe he hasn't; remember that his post is already several years old, and Cycles didn't exist back then.


Even today I can't use Cycles with the GPU, because I have a Radeon 6970 and Cycles does not support the bloody thing.
Posted: May 18, 2013
collegeGuySETH467
Hi guys, I have a tutor at college who went through this with us. He explained that rendering is done on the CPU because achieving high-end visual effects requires advanced simulation software, and that software runs more efficiently on a CPU. A GPU is designed for hardware acceleration: it renders images directly in hardware, meaning it can spit out images in real time faster than a CPU, since it has been designed and optimized to do so, but those images are not of the same quality as a CPU render.

This quality cannot be achieved by rendering on fixed hardware, as advanced software is needed to simulate and calculate all the factors. It would be unnecessarily difficult to design hardware that could do this (though if you could, it would render faster than a CPU), but you can upgrade software just by writing a new version, whereas the hardware would have to be changed with each upgrade. Rendering on the CPU is also more practical because of this.
This quality cannot be achieve by rendering on hardware as advanced software needs to be used to simulate and calculate all the factors. It would be uneccesarily difficult to design hardware that could do this (and if you could it would render faster than a cpu) but you can upgrade software by writing a new version hardware would have to be constantly changed with each upgrade. Rendering on the cpu is also more practical due to this issue