The CPU vs GPU Showdown

With all the exciting advancements in GPU manufacturing, along with the growing support for GPU rendering among major 3D DCCs, it seems that relying on the CPU for offline rendering will soon be a thing of the past.
With the addition of GPU nodes to our render farm and a little downtime in the past month, we decided to put this to the test.

In this article, we’ll go over the results of tests we ran on several machines across our infrastructure to see just how dramatic the improvements are when rendering on a GPU, and how those improvements scale in network rendering. From time to time we run benchmarks to check how our hardware holds up amidst the rapid advancement of processing units relative to the work our clients demand of them. We also use these opportunities to better understand the different use cases for our render farm and our GPU rental service.

We’ll begin by rendering a scene we prepared on a powerful CPU and a powerful GPU, then we’ll compare those results with the same scene rendered over at the farm. To run these tests, we used hardware from our infrastructure at both GarageFarm.NET and Xesktop.

Rendering a scene test on a powerful CPU and GPU


About GarageFarm.NET and Xesktop

GarageFarm.NET is a cloud render farm that seamlessly connects with your 3D software and fully automates the process of rendering. You can send your scene right from the interface of your application without any complex and time-consuming setups.

Xesktop provides powerful, dedicated GPU servers in the cloud at your disposal for GPU 3D rendering & processing Big Data at affordable rates.

Single Node Tests

The first test we ran pitted our fastest node at GarageFarm.NET, with 176 cores, against Xesktop’s fastest GPU server, with 40,960 CUDA cores, using one of our interior scenes featuring assets from 3DBee.IT.

The two machines are comparable from the customer’s perspective: our 176-core node is dedicated to special cases that are heavily impacted by scene loading time, and serves as a single machine from which a range of frames can be rendered continuously, as is the case with the Xesktop servers.

This node is ideal purely for rendering high-memory scenes that our regular nodes may not handle, while the GPU server is the better choice for similar scenes where quick turnarounds are expected, or where there isn’t as much rush to complete the render, since quick optimizations and adjustments can be made directly from within the open project, albeit at the cost of time spent configuring the server.

Using an interior animation consisting of 490 frames, we began this first test by rendering only every 40th frame.

 First test rendering only every 40th frame
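For reference, stepping through a 490-frame animation at every 40th frame yields 13 test frames. A quick sketch (assuming the sequence is numbered starting at frame 1; the article doesn’t state the start frame):

```python
# Sample every 40th frame of a 490-frame animation.
# Frame numbering 1..490 is our assumption, not stated in the tests.
frames = list(range(1, 491, 40))

print(len(frames))             # 13 frames in the test pass
print(frames[0], frames[-1])   # 1 481
```

This is the same sparse-sampling trick commonly used to gauge a scene’s render cost before committing to the full range.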


The results clearly favored the GPU server over our CPU node.

GPU server test results



The GPU server rendered approximately 46 minutes faster, making it the better choice in terms of time. In terms of price, the cost on the CPU node amounted to 5.75 USD, while on the GPU server, which is rented at 8 USD an hour, the rendering alone amounted to 2.26 USD.
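The per-render cost on a rented server is simply the hourly rate times the elapsed time. A minimal check against the figures above (at the quoted 8 USD/hour, the 2.26 USD for the GPU pass works out to roughly 17 minutes of pure render time; the exact elapsed time isn’t stated in the test, so this is back-calculated):

```python
def render_cost(rate_per_hour, minutes):
    """Cost of a render given an hourly rental rate and elapsed minutes."""
    return rate_per_hour * minutes / 60.0

# At 8 USD/hour, the quoted 2.26 USD implies about 17 minutes of rendering.
implied_minutes = 2.26 / 8.0 * 60.0
print(round(implied_minutes))                       # ~17 minutes
print(round(render_cost(8.0, implied_minutes), 2))  # 2.26
```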

The 4x E5 4669 node is reserved for scenes with extremely high memory consumption, however; in other cases, each frame would render on its own node simultaneously, ultimately making the farm a faster, albeit less customizable, solution.

Render Farm Node Tests

The next test pitted GarageFarm’s CPU nodes against the GPU nodes using the same setup, but this time rendering the full frame range.

Render farm CPU node test
CPU nodes


Render farm GPU node test
GPU nodes

All 490 frames were rendered on the CPU nodes at an average time per frame of 15 minutes 59 seconds, while on the GPU nodes each frame rendered in an average of 4 minutes 56 seconds.

We use average time per frame to rule out factors outside of hardware capability that affect the elapsed time of the jobs, such as the availability of nodes at the time of the tests.

Summary

Test results summary


The total cost of the job on the CPU nodes amounted to 147.45 USD; more than half of that amount was saved on the GPU nodes, which came to a total of 61.45 USD.
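The savings claim can be checked directly from the two totals, and the per-frame averages above also let us estimate the aggregate compute time each pass consumed (aggregate node-hours, not wall-clock time, since the farm renders frames in parallel):

```python
cpu_total_usd = 147.45
gpu_total_usd = 61.45

saved = cpu_total_usd - gpu_total_usd
print(round(saved, 2))              # 86.0 USD saved
print(saved > cpu_total_usd / 2)    # True: more than half the CPU cost

# Aggregate node-time across all 490 frames, from the per-frame averages.
frames = 490
cpu_hours = frames * (15 * 60 + 59) / 3600   # ~130.5 node-hours
gpu_hours = frames * (4 * 60 + 56) / 3600    # ~40.3 node-hours
print(round(cpu_hours, 1), round(gpu_hours, 1))
```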

It’s clear that GPU rendering wins the day in typical rendering scenarios, both on single machines and over a render farm, but CPU rendering may still be as essential to a 3D pipeline as it has always been, for a few key reasons:

  1. Larger projects may require more RAM than GPU cards can handle. 
  2. Certain features incorporated into a scene may not work as well on a GPU.
  3. The constant need for keeping GPU drivers up to date is an additional overhead to deal with.
  4. It is less difficult to set up a network rendering infrastructure with CPU-only nodes.

Of course, with the rate at which GPU cards are improving, it’s possible that the issues we experience now will soon be addressed. 

In any case, we’re eager to see what the future holds for 3D rendering, and how CPU and GPU computing will be leveraged in upcoming releases of 3D suites and render engines.

Further reading:

We recently hosted a webinar explaining the uses, advantages, and disadvantages of third party rendering services, and what we at GarageFarm.NET do to provide as seamless a solution to your needs as possible.

We also release weekly podcasts, where some of our team discuss the latest developments in CG, improving as a 3D artist, and everything in between. Listen to the conversation on CPU vs GPU rendering.
