Has anyone tried employing high-performance cluster computing to improve imagery processing time? If so, was it simple to set up and reliable? Thanks
I just spent the month of May trying nearly every Amazon and Microsoft cloud server combination with a Pix4D trial license (note: running on Windows Server requires the Enterprise license, not Desktop), and I found no scenario that could match my desktop. I processed about 40 billion pixels, so it was no small job; bigger jobs may require more RAM to run efficiently.
CPU: Intel® Core™ i7-5960X CPU @ 3.00GHz
GPU: NVIDIA GeForce GTX 1060 6GB
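For a sense of scale, here is a rough back-of-envelope on the raw data size of that job. The 3-bytes-per-pixel figure is my assumption (8-bit RGB, no overlap or intermediate products counted), not something stated above:

```python
# Back-of-envelope estimate of raw pixel data for the job described above.
# Assumptions (not from the post): 3 bytes per pixel (8-bit RGB),
# decimal gigabytes, ignoring image overlap and intermediate files.
pixels = 40e9          # ~40 billion pixels processed
bytes_per_pixel = 3    # assumed 8-bit RGB
raw_gb = pixels * bytes_per_pixel / 1e9
print(f"~{raw_gb:.0f} GB of raw pixel data")  # ~120 GB
```

Actual peak RAM during dense reconstruction will differ from the raw input size, but it gives a feel for why instance memory matters at this scale.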
I can go deeper into the iterations if you want to discuss via email, Adam.Jordan@nhiae.com