
NVIDIA GPUs in SLI: is it worth it?

Posted: February 07, 2010
evo
Hello everyone, I'm new here so I hope my question isn't too stupid. I'm trying to build myself a workstation PC to run Rhino3D and render with various other software. It's never going to be used for gaming. I've got the chance to buy two NVIDIA Quadro FX 4500 X2 (PNY) 1GB GPUs. With the correct motherboard, possibly an Asus P6T (Intel X58), and an Intel i7 CPU, would it be worth buying them both and running them in SLI? Would I get any benefit from it? I'm a bit in the dark with the whole workstation thing, but what I've read suggests it's worth going that route. I thought that by asking you guys I might get some first-hand experience.

Hope you can help me.

Cheers Evo
Posted: April 06, 2010
PixelOz
Well, I recently acquired an i7 920 PC with two EVGA GTX 260 graphics cards operating in SLI mode, and I can tell you that in the 3D programs that support it, it does make a difference.

I know your cards are more workstation-centric and mine are more game-centric, but that doesn't mean I can't use my computer as a workstation with a fair degree of stability; it's just that cards like the Quadros are better optimized for CAD applications.

I know from my gaming experience that SLI does make a difference. I have tried several game demos like Crysis and Batman: Arkham Asylum, and I can tell you that at least in gaming you can clearly tell the difference in performance when you turn SLI on or off. I know I'm using games as a comparison, but it can be a good one, because games are usually very graphics-intensive and very demanding on your PC hardware overall, so they are not a bad test of PC performance.

Crysis in particular is usually regarded as a very demanding game, and I heard many complaints about it needing too much hardware power to run properly. On my new i7 PC I never noticed; it just runs great with those two graphics cards, and it can still run well with a single one, but in SLI mode it really shines. I'm still not sure exactly how that will translate to the workstation cards you mentioned, but SLI may make a difference.

What I can tell you is that an i7 CPU is no slouch, and that's the opinion of many people around the web from what I have investigated. I have an MSI X58 Platinum SLI motherboard, and before that I had a Pentium 4 3 GHz Hyper-Threading PC; the new PC leaves it eating dust.

For rendering your CAD models the four-core CPU will make a huge difference, especially because the i7 has Hyper-Threading, so it can run 8 threads concurrently and performs roughly like a six-core CPU without Hyper-Threading. That is going to be significant at render time, because most major rendering engines on the market, and even those in open source software, now support multithreading, so that CPU will accelerate your renderings considerably.
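
To make "support for multithreading" a bit more concrete, here is a minimal C sketch (POSIX threads, not code from any actual rendering engine) of the basic pattern: one worker per hardware thread, each rendering an interleaved share of the scanlines, so a Hyper-Threaded quad-core i7 keeps all 8 threads busy:

```c
/* Minimal multithreaded-rendering sketch. Assumes POSIX threads;
   render_row() is a placeholder, not a real shading function. */
#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 8    /* 8 hardware threads on a Hyper-Threaded quad-core i7 */
#define NUM_ROWS    1080 /* scanlines in the frame */

static void render_row(int row)
{
    /* placeholder for the real per-scanline shading work */
    (void)row;
}

static void *worker(void *arg)
{
    int id = (int)(long)arg;
    /* each thread takes every NUM_THREADS-th scanline */
    for (int row = id; row < NUM_ROWS; row += NUM_THREADS)
        render_row(row);
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_THREADS];
    for (long i = 0; i < NUM_THREADS; i++)
        pthread_create(&threads[i], NULL, worker, (void *)i);
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(threads[i], NULL);
    printf("frame done\n");
    return 0;
}
```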

If you are considering a renderer like HyperShot for Rhino (which is already a darned fast renderer on CPU power alone), it may be good to find out whether those graphics cards support CUDA, because HyperShot can now use CUDA together with the available CPU power to render even faster than it already did. If your cards do not support CUDA computing and you are considering adding HyperShot to your Rhino setup, you may actually be better off buying a couple of the newer high-end gaming cards, like the EVGA GTX 480 or 470 that just came on the market using the new chip from NVIDIA, because that is going to make a huge difference in rendering with HyperShot.
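
If you want to check from code whether a given card is CUDA-capable, a quick device query does it. Here is a small sketch against the CUDA runtime API (compile with nvcc); NVIDIA's own list of CUDA-enabled GPUs remains the authoritative reference:

```c
/* CUDA device-query sketch: lists every CUDA-capable GPU. */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        printf("no CUDA-capable GPU found\n");
        return 1;
    }
    for (int i = 0; i < count; i++) {
        struct cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("GPU %d: %s (compute capability %d.%d)\n",
               i, prop.name, prop.major, prop.minor);
    }
    return 0;
}
```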

A couple of those new 480s in SLI mode, coupled with an i7, could make HyperShot go really, really fast.
Posted: April 06, 2010
PixelOz
I went to the NVIDIA site and took a look at the list of graphics cards that have CUDA support, but the FX 4500 X2 is not listed there. The 4600 and the 4700 X2 are listed as CUDA-capable, but the 4500 X2 is not, so if you are going to use HyperShot in the future you might want to take that into account.
Posted: May 11, 2010
Sickmind
Note that you'll need a more powerful PSU.
Posted: May 12, 2010
PixelOz
I have to add here, from what I learned recently, that SLI or CrossFire may not be necessary for CUDA or Stream computing respectively.

What I mean is that a GPU-based renderer can take advantage of multiple GPUs to compute rendering, be that through CUDA, Stream, DirectCompute or OpenCL. What the renderers do is use the additional GPUs available through the bus (the PCI Express bus).
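
To illustrate what "using the additional GPUs through the bus" looks like from a renderer's point of view, here is a minimal OpenCL sketch in C that simply enumerates every GPU device the drivers expose, with no SLI or CrossFire involved (array sizes are arbitrary for the example):

```c
/* OpenCL sketch: enumerate all GPU devices visible on the bus. */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;
    clGetPlatformIDs(8, platforms, &num_platforms);

    for (cl_uint p = 0; p < num_platforms; p++) {
        cl_device_id devices[16];
        cl_uint num_devices = 0;
        /* ask this platform for all of its GPU devices */
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU,
                           16, devices, &num_devices) != CL_SUCCESS)
            continue;
        for (cl_uint d = 0; d < num_devices; d++) {
            char name[256];
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME,
                            sizeof(name), name, NULL);
            printf("platform %u, GPU %u: %s\n",
                   (unsigned)p, (unsigned)d, name);
        }
    }
    return 0;
}
```

A renderer can then create a command queue per device and split the work between them, which is why the SLI bridge is not needed for this kind of computing.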

For example, the free LuxRender renderer will be able to use, in version 0.8 (due at the end of this year), all CPU cores and all GPU units in a single PC or in multiple networked PCs.

So if, for example, you have two networked PCs, each with dual quad-core CPUs (a total of 16 CPU cores) and two NVIDIA GTX 295 graphics cards (the 295 has two GPU units per card, so 8 GPU units across the two machines), then LuxRender will be able to use all of that, the 16 CPU cores and the 8 GPU units, to render a single frame through OpenCL.

LuxRender will do this without SLI or CrossFire, because the LuxRender people told me in their forum that SLI/CrossFire support in OpenCL is pretty broken at the moment, but LuxRender will still be able to use all the available GPUs for rendering through the bus.

As for SLI or CrossFire, they matter more for real-time 3D graphics, the kind that 3D games use or that 3D programs use while editing. Quadro-type (workstation) graphics cards provide a few additional capabilities, like anti-aliasing of the viewports, and for that kind of thing, and for higher real-time performance, I think SLI or CrossFire does make a difference; a lot of the time, though, it depends on the software, games or otherwise.

The bottom line is that other renderers that use any type of available GPU computing will probably be able to use that power without SLI or CrossFire. The important point is that multiple graphics cards can still make a difference even if they cannot be linked through SLI or CrossFire, because for GPU computing the link may not be necessary. SLI/CrossFire support in OpenCL and other types of GPU computing could also improve in the future, but even without it the cards can be put to that purpose.

Now I have to be clear that not all programs that can do GPU computing are able to use all the available GPU units in a PC; some can use only one. For example, Adobe Premiere CS5 can use certain CUDA-based cards to accelerate video performance a lot (the list of supported cards that can accelerate the Mercury engine in Premiere is kinda limited at the moment), but it can only use one GPU for that and it ignores the others.
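
The single-GPU pattern I'm describing looks something like this in CUDA runtime terms (illustrative only, this is of course not Adobe's code): the application picks one device and simply never touches the rest:

```c
/* Sketch of an app that uses only one CUDA device and ignores the rest. */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    cudaGetDeviceCount(&count);
    printf("%d CUDA device(s) present, using device 0 only\n", count);

    cudaSetDevice(0); /* every later kernel launch goes to this one GPU */
    /* ... launch acceleration kernels here; devices 1..count-1 sit idle ... */
    return 0;
}
```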

This, I believe, is because GPU computing is just beginning to gain momentum and is not fully mature yet. As time goes by, chances are that more and more programs will be able to take advantage of multiple GPUs in PCs for different types of computation, and as I mentioned, that does not necessarily mean it has to happen through SLI or CrossFire.

For some renderers, GPU computing can get closer to real-time, but it is not truly real-time 3D graphics, no matter what the interactivity of things like HyperShot (recently renamed Shot) and other GPU renderers may lead you to believe. For true real-time 3D graphics, SLI or CrossFire can be necessary and can make a substantial difference in performance; again, it will depend on the software.

Here you can see again that software is still trying to catch up to parallel processing. CPUs can no longer expand much in terms of GHz, but they are expanding in power through parallel computing. GPUs are still expanding in both clock speed and parallelism, but I presume their clock speeds will hit a ceiling soon, just as happened to CPUs, and then they too will have to keep expanding through parallelism. For software, this is a big transition that will take time.

As time goes by you are seeing more and more programs take advantage of parallel processing on both CPUs and GPUs, and I hope that soon most software that really needs it (and the amount of software that could benefit may be much greater than people think at the moment) will take advantage of that new-found power, but the transition will still take some time.

Software is actually going through multiple transitions at the moment. It is in a transition toward 64-bit computing, and that is nowhere near finished. If you look at your Program Files directories in Vista 64-bit, for example (in Vista 64-bit you have two, one for 32-bit applications and one for 64-bit applications), you will notice that 95 percent of the software is installed in the 32-bit Program Files directory, and that alone shows you that the transition toward 64-bit computing still has a long journey ahead.

I have Illustrator CS4, and my PC is an Intel i7 with 12 gigabytes of memory. If I try to export and rasterize (convert an Illustrator file into one bitmap format or another) at very high resolution (like 1200 DPI, or often far less than that), Illustrator runs out of memory quickly, depending on the complexity of the vector artwork. That is because it cannot use all the memory I have, since it is still a 32-bit program; at 1200 DPI even a letter-size page is over 130 megapixels, which means hundreds of megabytes per image buffer, while a 32-bit process only gets a few gigabytes of address space at best.
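
A tiny C illustration of the wall a 32-bit build runs into, regardless of how much RAM is installed:

```c
/* A 32-bit build has 4-byte pointers, so the process can address at
   most 4 GB (Windows typically gives a 32-bit app 2-4 GB of that),
   even on a machine with 12 GB of RAM. */
#include <stdio.h>

int main(void)
{
    unsigned bits = 8u * (unsigned)sizeof(void *);
    printf("this build uses %u-bit pointers\n", bits);
    if (bits == 32)
        printf("address space tops out at 4 GB, no matter the installed RAM\n");
    else
        printf("64-bit build: the 4 GB wall is gone\n");
    return 0;
}
```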

Photoshop is already much further along toward 64-bit, but many other programs in the Adobe line are still mostly 32-bit, and this is one of the biggest software companies in the world.

So we have a transition toward multi-core CPU computing that most software is still trying to catch up with, and now parallel GPU computing enters the scene, which is yet another new direction software has to go. All of that is quite a bit, so programmers have their work cut out for them in the immediate future.

So we are mostly waiting for software to catch up with all these hardware transitions that happened at the same time. It is quite a big hurdle to overcome, particularly the move toward parallel computing, but software will get there little by little, and I'm afraid we will have to be patient until we can tap into all that new added hardware power. :)
Posted: June 09, 2010
showboat66
Yes, if your software can utilize the extra power it makes a huge difference.