
Can Pix4D use GPUs to process photos?

I was curious whether Pix4D can use GPUs to assist with its processing functions.

In other words if I put together a computer that had 6 GPUs (basically 6 graphics cards) could Pix4D use those GPUs to process the photos?

If so, does it matter which type of video card I get? (NVIDIA / AMD)

 

Thank you,

 

Jason

Hey Jason,

Pix4D uses the GPU primarily during Step 1; however, it can only leverage one GPU per project. The GPU must support OpenGL 3.2 or greater for Pix4D to recognize it.

Different graphics cards benchmark at different speeds in Pix4D, with the NVIDIA 1080 Ti currently the fastest available GPU for processing.

Yours Sincerely,

Tim

Hi Tim,

 

So the point cloud processing does not use the GPU?

Why the need for a powerful GPU? From what I've noticed, the first step (initial processing) is usually pretty quick. I just upgraded to a better GPU to get better point cloud processing times; is this not the case? Can you explain for which processes the GPU is needed? Sorry for the in-depth question! Thanks, love this program!

Lucas

Hi Lucas, 

Pix4D uses the GPU primarily during Step 1 and to a lesser extent during Step 2. To learn more, you can go through our support article on Hardware components usage when processing with Pix4D, which explains the processing steps and GPU usage in detail.

Hi, thank you for your answer. From my tests it appears that I can do the processing of Steps 1 and 2 without a GPU at all. It seems the only need for a GPU is for Step 3 and for displaying the point cloud and 3D mesh. Can you explain how my CPU is able to process these first two steps without the GPU?

Hi, sorry. On second look, it actually appears that the GPU is only really needed for displaying the cloud and mesh once everything has been processed. Correct?

Hi Lucas,

I'll add more information to what my colleagues wrote here before.

I was curious if Pix4D has the capability to use GPUs to assist with the processing functions?

As Timothy said, Pix4Dmapper is compatible with any GPU that supports OpenGL 3.2 or above. This means that Pix4Dmapper should work even with low-performance Intel integrated graphics (HD 4000 and above). However, for faster processing, Pix4Dmapper also uses the processing power of GPUs that are compatible with NVIDIA CUDA 9.1 and above (with the latest drivers installed). This increases performance in Pix4Dmapper, especially during Step 1 and with large projects, which is why we recommend using a CUDA-compatible GPU. Any NVIDIA GPU (GTX, Mobile, Tesla, Titan, or Quadro) that supports CUDA 9.1 and has the latest drivers will be used during processing. We recommend GeForce GPUs because they are usually cheaper for the performance they offer compared with other lines such as Quadro. As Timothy mentioned, the NVIDIA 1080 Ti is well suited for processing.
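As a quick way to check which NVIDIA GPUs (and therefore CUDA-capable cards) a machine actually exposes, here is a small Python sketch that shells out to `nvidia-smi`. The function name is mine, and the check assumes the NVIDIA driver is installed; on a machine without it, the function simply returns an empty list:

```python
import shutil
import subprocess

def detect_nvidia_gpus():
    """Return the names of NVIDIA GPUs visible to the driver, or an
    empty list when nvidia-smi is not installed or reports an error."""
    if shutil.which("nvidia-smi") is None:
        return []
    result = subprocess.run(
        ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        return []
    return [line.strip() for line in result.stdout.splitlines() if line.strip()]

print(detect_nvidia_gpus())
```

If this prints an empty list on your workstation, Pix4D will fall back to CPU-only processing for the CUDA-accelerated parts.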

In other words, if I put together a computer that had 6 GPUs (basically 6 graphics cards) could Pix4D use those GPUs to process the photos?

NVIDIA developed SLI (Scalable Link Interface), a multi-GPU technology for linking two or more video cards together to produce a single output; it is a parallel-processing scheme meant to increase the processing power available for graphics. SLI is compatible with Pix4D, but it has little impact: with or without SLI enabled, the results are more or less the same. Using, for example, two graphics cards connected with SLI has the following effects:

For Processing: 
It makes no difference in terms of processing. Say you have two cards, each with 2 GB of RAM. Connecting them with SLI does not give you one card with 4 GB of RAM; you still effectively have 2 GB. So, practically, it will not help the processing.

For Visualization
In the rayCloud: It could indeed help the visualization of objects in the rayCloud for big projects. If you already have a good card and the rayCloud renders smoothly, it may make no difference. If you have problems visualizing the outputs in the rayCloud (for big projects), then SLI (dual graphics card) technology could help.

Another option, where you would actually get benefits from having two cards for processing, would be to have the two cards but not connect them with SLI. This way you more or less always have a free GPU (one that is not driving the screen) and avoid the RAM duplication that SLI entails. There is no special setup required for SLI: if the PC has such technology, the software detects it automatically.
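If you do run two cards without SLI as described, a helper like the following (my own sketch, assuming `nvidia-smi` is on the PATH) could identify which card is currently least busy, e.g. the one not driving the display:

```python
import shutil
import subprocess

def pick_least_busy_gpu():
    """Return the index of the NVIDIA GPU with the lowest reported
    utilization, or None when nvidia-smi is unavailable."""
    if shutil.which("nvidia-smi") is None:
        return None
    result = subprocess.run(
        ["nvidia-smi", "--query-gpu=index,utilization.gpu",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True,
    )
    if result.returncode != 0 or not result.stdout.strip():
        return None
    rows = [line.split(",") for line in result.stdout.splitlines() if line.strip()]
    # Build (utilization, index) pairs; min() picks the least-busy card.
    return min((int(util), int(idx)) for idx, util in rows)[1]
```

Pix4D itself picks a GPU automatically, so treat this only as a way to inspect what the driver reports.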

From our practice, just one good GPU will be enough even with big projects. Read more in Computer requirements.

If so, does it matter which type of video card I get? (NVIDIA / AMD)

We don't support OpenCL. Therefore, you won't be able to use an AMD card for computation the way you can with NVIDIA.

I hope we covered all your questions. If not please contact us again.

Best!

Hey,

Thanks a lot for the in-depth description, I really appreciate it, BUT my main question has still not been answered, so I'll ask a more specific one. What I'm wondering is: can I process everything on an extremely powerful computer with a very modern CPU setup (all of the CPU tech you recommend) and NO GPU in the system? Will I be able to process everything on this CPU-only system, for speed/power reasons, AND THEN TRANSFER all the processed files to a smaller system that DOES have a GPU, in order to interact with the processed 3D data (where the GPU is needed)? I am essentially trying to save time. It seems the GPU only speeds things up by about 10 to 20 percent, whereas a massive CPU system could scale with the number of CPUs integrated, say 50 percent faster for each additional CPU, which would be far more useful than a single powerful GPU. Could you integrate, say, 12 CPUs for very fast processing, then transfer to a GPU system for interaction and editing?

Hi Lucas,

I’m happy that you came back to us with more detailed questions.
I hope this time I’ll be able to answer all of them, if not please let me know.

Generally speaking, the presence of a supported GPU (type and version) speeds up processing by up to 40%, from what our testers have observed. We utilize the GPU not only for display purposes but also for the feature extraction/match/rematch subprocesses of Step 1, independently of the number of CPU cores. When no GPU is detected, all processes are handled by the CPU, so the more powerful the CPU, the better. In theory, the more cores available, the faster the processing (but only in the calibration part). However, as we have not tested this ourselves, I cannot give you any estimate of the speed improvement at this point. So to answer your question: yes, you can do it; it sounds reasonable. Nevertheless, we have never run such a test, so it would be great to hear your findings on how this solution works.
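To illustrate the "more cores, faster calibration" idea in general terms, here is a minimal Python sketch that spreads a CPU-bound stand-in task across a process pool. This is not Pix4D code, just an illustration of how embarrassingly parallel per-image work scales with worker processes:

```python
import os
import time
from multiprocessing import Pool

def calibrate_image(image_id):
    """Stand-in for a CPU-bound per-image step (e.g. calibration work)."""
    total = sum(i * i for i in range(200_000))
    return (image_id, total)

def process_batch(image_ids, workers=None):
    """Spread the per-image work across CPU cores with a process pool."""
    workers = workers or os.cpu_count()
    with Pool(processes=workers) as pool:
        return pool.map(calibrate_image, image_ids)

if __name__ == "__main__":
    ids = list(range(16))
    start = time.perf_counter()
    results = process_batch(ids)
    print(f"{len(results)} images in {time.perf_counter() - start:.2f}s "
          f"on {os.cpu_count()} cores")
```

Steps that are not parallelizable (and the GPU-accelerated parts, when a GPU is present) will not scale this way, which is why adding cores mainly helps the calibration part.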

Is everything clear for you now? Let me know 

and all the best! :-) 

Hi Lucas,

I got additional information from our developers, which I have already included in my post above. Please read it once again and let me know whether you have any questions.

Best

Hello there,

Well, time to move forward! I was hoping you would have tried this already; it makes me wonder what type of cloud you're using for your cloud processing. I will be doing some tests soon and will gladly share the results with you. Thanks again, and I look forward to continuing the conversation.

Hi Lucas,

Cloud… here are the specs :-)

CPU: Intel® Xeon® Platinum 8124M CPU @ 3.00GHz
RAM: 69GB
Operating system: Linux 4.15.0-1010-aws x86_64

Best

Hi Lucas,

To follow up on Beata's response, we use AWS c5.9xlarge and c5.18xlarge EC2 instances. I believe Pix4D is currently limited to using one CPU socket per instance. From experience with other customers, many have used the workflow you described above: processing large datasets on a CPU-optimized compute instance and then transferring to a GPU instance for visualization and editing.

So long as you follow the setup described in our article on using AWS, you should be able to implement that workflow.
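For the "process on CPU, then move to a GPU machine" workflow, the transfer step could be sketched like this in Python. The paths, host name, and function are placeholders of my own, not anything Pix4D-specific; the only real tool assumed is `rsync`:

```python
import subprocess

def transfer_project(project_dir, gpu_host, dest_dir, dry_run=True):
    """Build (and optionally run) an rsync command that copies a processed
    project directory to a GPU workstation. Host and paths are placeholders."""
    cmd = [
        "rsync", "-az", "--progress",
        f"{project_dir.rstrip('/')}/",          # trailing slash: copy contents
        f"{gpu_host}:{dest_dir.rstrip('/')}/",
    ]
    if not dry_run:
        subprocess.run(cmd, check=True)
    return cmd
```

With `dry_run=True` you can inspect the exact command before pointing it at a real host.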