Does Pix4D Require fp64 GPU Compute Hardware?

It is well known that NVIDIA’s Maxwell GeForce GPUs have dramatically reduced fp64 performance, with the real double-precision workhorses being the Quadro and Tesla lines. The problem for customers building dedicated Pix4D workstations is that we don’t know whether the software operates in fp64 or fp32.
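(In lieu of official numbers, here is a minimal CUDA micro-benchmark sketch anyone can run to measure their own card’s fp32:fp64 throughput gap. The kernels, grid dimensions, and iteration counts are just illustrative choices, nothing taken from Pix4D itself.)

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Dependent FMA chains so the compiler cannot eliminate the loop.
__global__ void fma32(float* out, int iters) {
    float a = 1.0f;
    for (int i = 0; i < iters; ++i) a = a * 1.000001f + 0.5f;  // fp32 FMAs
    out[blockIdx.x * blockDim.x + threadIdx.x] = a;
}

__global__ void fma64(double* out, int iters) {
    double a = 1.0;
    for (int i = 0; i < iters; ++i) a = a * 1.000001 + 0.5;    // fp64 FMAs
    out[blockIdx.x * blockDim.x + threadIdx.x] = a;
}

int main() {
    const int blocks = 1024, threads = 256, iters = 100000;
    float* d32;  double* d64;
    cudaMalloc(&d32, blocks * threads * sizeof(float));
    cudaMalloc(&d64, blocks * threads * sizeof(double));

    cudaEvent_t t0, t1;
    cudaEventCreate(&t0); cudaEventCreate(&t1);
    float ms32, ms64;

    cudaEventRecord(t0);
    fma32<<<blocks, threads>>>(d32, iters);
    cudaEventRecord(t1); cudaEventSynchronize(t1);
    cudaEventElapsedTime(&ms32, t0, t1);

    cudaEventRecord(t0);
    fma64<<<blocks, threads>>>(d64, iters);
    cudaEventRecord(t1); cudaEventSynchronize(t1);
    cudaEventElapsedTime(&ms64, t0, t1);

    printf("fp32: %.1f ms   fp64: %.1f ms   fp64/fp32 slowdown: %.1fx\n",
           ms32, ms64, ms64 / ms32);
    cudaFree(d32); cudaFree(d64);
    return 0;
}
```

On a GeForce Maxwell card I would expect a slowdown in the neighbourhood of the 1/32 fp64 rate NVIDIA quotes for that architecture, while Tesla-class parts sit far closer to parity.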

I’ve been looking for a whitepaper explaining whether or not Pix4D is an fp64 application, but found nothing.

We want to build a couple of dedicated high-performance workstations and, obviously, we would like to purchase the right cards. Could someone from Pix4D please release detailed information to help us choose the right hardware? Thanks guys, we love the software!

Cheers,
Nick

Hello,

Depending on the processing step, the operations can be in double- or single-precision floating point.

You can find more information about the use of the GPU, as well as the cards we recommend, here:

https://support.pix4d.com/hc/en-us/articles/203405619-Use-of-the-GPU-in-Pix4Dmapper
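If it helps while comparing cards against that list, a standard CUDA device query (plain runtime API calls, nothing specific to Pix4Dmapper) reports what each card in a machine offers:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int n = 0;
    cudaGetDeviceCount(&n);
    for (int i = 0; i < n; ++i) {
        cudaDeviceProp p;
        cudaGetDeviceProperties(&p, i);
        // The compute capability identifies the architecture; NVIDIA's
        // documentation lists each architecture's fp64:fp32 throughput ratio.
        printf("GPU %d: %s, compute capability %d.%d, %d SMs, %.1f GB\n",
               i, p.name, p.major, p.minor, p.multiProcessorCount,
               p.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}
```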

Best regards,

Thank you. I have read that document a couple of times, along with a few related ones on this site, and hunted around the net for more specific info and community experiences. However, not much was to be found, which is surprising for such a performance-oriented application as Pix4D.

Because of the lack of benchmarks and detailed information on Pix4D performance across various fp32- and fp64-oriented GPUs and other hardware, we’ve had to generalise. That is fine if your requirements are satisfied by a “good enough” approach and your budget can’t stretch beyond consumer components anyway; in that case we would just aim for the best value proposition, use a card like an NVIDIA 980 Ti, and be done.

But what if you wanted to halve your GPU compute times from a 980 Ti but needed to be sure of the gain to justify the stretch in budget? There is no way to get that information from the creators of Pix4D.

The same goes for CPU compute. There is no way to see how performance scales from a 5820K to a 5960X, to a twelve-core Xeon, to an eighteen-core Xeon or dual twelve-core Xeons.
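In the meantime the only option is measuring on hardware we already have. As a rough sketch (the workload below is a made-up arithmetic loop, not a real Pix4D step, so treat it as a ceiling for embarrassingly parallel work only), something like this shows how a fixed job scales as threads are added:

```cuda
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

// Toy CPU-bound workload: a dependent arithmetic chain.
static double burn(long iters) {
    double x = 1.0;
    for (long i = 0; i < iters; ++i) x = x * 1.0000001 + 1e-9;
    return x;
}

int main() {
    const long total = 400000000L;  // fixed amount of work to split
    unsigned maxThreads = std::max(1u, std::thread::hardware_concurrency());
    for (unsigned t = 1; t <= maxThreads; t *= 2) {
        auto start = std::chrono::steady_clock::now();
        std::vector<std::thread> pool;
        for (unsigned i = 0; i < t; ++i)
            pool.emplace_back([&] { volatile double r = burn(total / t); (void)r; });
        for (auto& th : pool) th.join();
        double secs = std::chrono::duration<double>(
                          std::chrono::steady_clock::now() - start).count();
        printf("%2u threads: %.2f s\n", t, secs);
    }
    return 0;
}
```

Pix4D’s real steps mix I/O, memory bandwidth, and GPU work, so actual scaling from a 5820K to dual Xeons could look very different, which is exactly why published numbers matter.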

So, if we want to halve our GPU compute times from a 980 Ti, how far do we need to jump? Will a mid-range Quadro do it, or will we need to commit to a high-end Quadro? Or is there so much fp64 and CUDA work in Pix4D that committing to a Tesla is the only way to get that gain?

And what volume and type of data would we need to be crunching to justify each scale point? There really aren’t any metrics available to make these decisions. Do we have to hope that AnandTech benchmarks Pix4D for us? Seriously?

If there is one thing the tech industry loves, it is clear metrics on which to base configuration and budget decisions. Unfortunately, nothing on the Pix4D site provides this. Not even close.

If you don’t have the resources, why not let the community provide the metrics? Build the capability to capture and present this on your website: configuration details, data type and volume, processing times, results, targets, etc. Provide it back to your customers so that the picture becomes clear.
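Even something as simple as the following sketch would do. Every field name and number below is a hypothetical placeholder (Pix4D has no such schema today); it is only meant to show the shape of one community submission:

```cuda
#include <cstdio>

// Hypothetical record of one community benchmark run.
struct BenchmarkEntry {
    const char* cpu;        // e.g. "i7-5960X"
    const char* gpu;        // e.g. "GTX 980 Ti"
    int         ram_gb;
    const char* dataset;    // data type, e.g. "nadir RGB"
    int         images;     // data amount
    double      step1_min;  // Initial Processing, minutes
    double      step2_min;  // Point Cloud and Mesh, minutes
    double      step3_min;  // DSM and Orthomosaic, minutes
};

int main() {
    // Placeholder values, not real measurements.
    BenchmarkEntry e = {"i7-5960X", "GTX 980 Ti", 64, "nadir RGB", 500,
                        30.0, 120.0, 45.0};
    // CSV output so entries from many users can be pooled and compared.
    printf("cpu,gpu,ram_gb,dataset,images,step1_min,step2_min,step3_min\n");
    printf("%s,%s,%d,%s,%d,%.1f,%.1f,%.1f\n",
           e.cpu, e.gpu, e.ram_gb, e.dataset, e.images,
           e.step1_min, e.step2_min, e.step3_min);
    return 0;
}
```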

BTW, it is almost impossible to use the Pix4D forums on a smartphone or tablet. It would be fantastic if this were improved. There are great forum platforms that are responsive and support community interaction well.

Thanks! Pix4D is a great product.

I’m afraid this has been my pain as well. Since February I’ve been trying to understand high-performance processing for P4D. Unfortunately, or fortunately, the volume work we’ve been courting has been slow getting contracts together. The options as I see them:

1: Build one or two high-performance workstations as in-house, end-to-end processing pipelines.

2: Build a few cheaper mid-range workstations in-house.

3: Rent as many cores as needed from Amazon or other services per the job requirements for the heavy crunching, and do the final processing locally.

I can’t say for sure, but I’d imagine P4D is looking at the viability of off-site processing for future-proofing and to help with the cost of constantly updating processing nodes.

Thoughts?!?!?!

I would really like to know whether there are significant gains from FP64 and the Quadro series cards, or even a Titan Z with dual GPUs and high FP64 throughput. Do render times decrease with FP64? Do large models have less error on an FP64-capable card? And how do we enable it, given that Pix4D Support said “or” in the comment earlier?
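To make the precision half of my question concrete, here is a tiny host-side demo (plain C++, nothing Pix4D-specific) of why fp32 versus fp64 matters once sums get large:

```cuda
#include <cstdio>

int main() {
    const long n = 100000000L;  // add 0.1 a hundred million times; exact sum is 1e7
    float  s32 = 0.0f;
    double s64 = 0.0;
    for (long i = 0; i < n; ++i) {
        s32 += 0.1f;  // stalls once the running sum dwarfs each new term
        s64 += 0.1;
    }
    printf("fp32 sum: %.1f (exact: 10000000.0)\n", s32);
    printf("fp64 sum: %.1f\n", s64);
    return 0;
}
```

That said, my understanding is that accuracy depends on the precision the software computes in, not on the card: an fp64-strong card runs double-precision code faster, it does not change the math.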