New Xeon Computer

So doubling the CUDA cores resulted in a saving of 2 hours…?


Yes, or the new version is faster.

Going to run a small baseline project that I have, to see what the difference is.


Philip, just an idea: are you choosing the Merge Tiles into One File option for the point cloud and the DSM? If so, it might be worth some trials with the tiles separated.

Pix4D has acknowledged there’s an issue with v3 Xeons, so we know it’s a coding issue.

Ran a smaller 300-photo project with the new version of Pix4D and the dual GTX 970 SLI configuration; no change in time, 1 hour 45 min.

Interesting, Austin… there doesn’t seem to be any correlation between data size and time.

I have run this project on an overclocked i7-5930K and it took 2 1/2 hours.

Appreciate all the research everyone is doing here, so thought I would jump in with some more stats.

We built an expensive dual-socket Xeon rig and have been disappointed with the processing times. So I did some basic benchmarking, simply running the default Pix4D demo project on the default settings on a variety of machines, all with version 3.0.13.

My desktop blows away the dual Xeons, which seem to struggle during the Stage 3 Ortho processing:

Obviously, results would likely differ once we have more realistic project sizes with hundreds of photos, but I was amazed at such a marked difference on the demo project.

Graham, this is great data. I’m going to run the demo set on my little homebrew computer when I’m back in the office. The one thing that still isn’t clear to me, though, is the graphics card’s influence on the results. Xeons by nature seem to be paired with Quadros in workstations! This is valuable testing you’ve done, but I wonder how it would go if it were just processors head to head.

Appreciate your post.

Steve

This thread has been very helpful for us in deciding on PC configurations for Pix4D. I would like to share a bit of our experience.

We currently have an i7 5820K, 96GB RAM, 500GB M.2 SSD, GTX 970 system which has been our workhorse for a few months. We have tried to process up to 10,000 images on this machine without much success (it hangs midway and takes forever to process). We have processed up to 5,000 images (from a Sony A6000) successfully (after it hung a couple of times during Step 2).

Now we are looking at building our second machine with better performance that can handle heavier workloads, and based on the discussion above we are sticking with i7 systems, looking to use an i7 5960X, 128GB RAM and a GTX 980 Ti. Hopefully this will be our best bet for money spent vs. performance. Would like to hear what people here think about this.

Also, just a thought: can we consolidate all the projects and our hardware experience here into a single Excel sheet somehow? This could be a great resource for anyone planning their next project. I am willing to volunteer to prepare a format if people are interested in sharing their results.

Parvin,

I run dual Xeon 2670s with 128GB RAM, a GTX 1070 and a PCIe SSD with great results, running 5,000+ images almost daily. I am looking into building a Ryzen PC in the next few weeks to see how strong they are in Pix4D. I also have an i7 6700K build which does perform faster in Step 1, but after that the dual Xeons kick in and cut the time at least in half in Step 2.

Has anyone found any definitive information on high-core-count Xeon performance with Pix4D (14 cores per CPU or higher)?  My questions are the same as the original poster’s, in trying to find information from anyone who has actually used something like the E5-2698 or E5-2699 processors.  In the V4 variants of these processors we are talking 20 and 22 physical cores per processor, respectively.  This is potentially up to 88 processing threads in a dual-socket, hyper-threaded setup.  There are trade-offs, though, when going this high in core count, one being peak processor speed while all cores are running.  The 2699 and 2698 processors, while having more cores, run at a lower peak speed with all cores running.  The sweet spot may be an E5-2697A V4 or E5-2690 V4, which have 16 and 14 physical cores respectively and still maintain turbo speeds of 3.1 and 3.2 GHz with all cores running.
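
To spell out the thread arithmetic behind that 88 figure (a quick illustrative calculation only, using the E5-2699 V4 core count quoted above):

    # Dual-socket, hyper-threaded system with E5-2699 V4 CPUs (22 physical cores each)
    $sockets        = 2
    $coresPerCpu    = 22
    $threadsPerCore = 2                            # Hyper-Threading
    $sockets * $coresPerCpu * $threadsPerCore      # = 88 logical processing threads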

See the following link for an interesting discussion on this subject (it is for Agisoft, not P4D, but still should be in the ballpark):

https://www.pugetsystems.com/labs/articles/Agisoft-PhotoScan-GPU-Acceleration-710/

Puget has a number of interesting discussion pages on their website about processing speed vs hardware used (related again to Agisoft).

We have budget for a fairly high-dollar system and are looking at dual high-core-count Xeons, 512GB of RAM, an M.2 OS drive, and a quad video card setup, but I can find no practical evidence for potential benefits with Pix4D regarding the really high-end processors.  One of our issues is that we are processing jobs with 36MP images (Sony A7R or Nikon D810) with high numbers of photos, and processing times are very high in some cases (days rather than hours).  We currently have a single-processor 8-core Xeon system (E5-2667 V2, I think; runs all cores at 3.6 GHz) with a GTX 1080 and 128GB of RAM and it is not enough for the bigger jobs we need to do.  We are using Pix4D, but we also have Agisoft and Trimble photogrammetry software as well, and they all seem to utilize hardware resources differently (one example is that Trimble does not seem to benefit from the GPU the way the other two do).


Hi all, just found this interesting topic since I also want to build a PC for image processing with P4D. Usually I process only 300 images with my laptop (i5), but I will process up to 2,000 images for my project.

So between these machines, a Z440 (E5-1620 v3, RAM upgraded to 64GB, Quadro K620) and an HP Envy Phoenix (i7 6700, RAM upgraded to 64GB, with a GTX 1070), which one is better? I read that Xeon v3 has problems with P4D.

Need your suggestion please. 

Hi All,

We ran a set of controlled files through various hosts.  We identified that the HP DL380 (2 x 20-core Xeon, 1 x NVIDIA Quadro M6000 24GB, 1 x K80, 768GB RAM, SSD array) took 10x longer than a 4-core i7 desktop.

The server was running Windows Server 2016 or 2012 in terminal server mode.

We have identified how to fix this on a server OS. This is what we need to change, but it is not an option via the GUI on a server OS:

https://support.pix4d.com/hc/en-us/articles/218195063-Long-Processing-Time-for-Step-3-with-Quadro-GPUs#gsc.tab=0

The option to change the 3D profile on a server OS does not exist in the NVIDIA app.

I was having a look into the desktop vs server version of the driver (files) to try and work out if I could somehow get the desktop tool to run in the server OS.

While comparing the installed NVIDIA files, I came across a directory with a heap of PowerShell scripts in it. Of particular interest was the file called “Manage3dProfiles.ps1”.


I searched the web for anyone who had used this tool; there were two results, neither of any use.

I had a look through the script and found what look to be some examples at the bottom. I edited them with what I thought we needed and ran the script on my desktop.  I verified in the NVIDIA control panel that it changed the 3D mode, switched it back to default in the control panel, and re-ran the script. Again it changed to the correct profile.

I then took the script to the server host and ran it; it reported it had updated the settings (although this could not be confirmed in a GUI app).

I re-ran our sample data and the results are amazingly different. See the results attached: 1:14:36 down to 0:11:06.

So it would seem that even though the GUI option does not exist, the settings can be changed via PowerShell.
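
For anyone wanting to try the same thing, the rough pattern is sketched below (the folder path is only illustrative and will vary by driver version, and the profile name is the one from the Pix4D article linked above, so treat both as assumptions rather than exact values):

    # Allow the unsigned NVIDIA script to run in this PowerShell session only.
    Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass

    # Change into the driver folder that contains the script.
    # (Illustrative path - the real location depends on the driver package.)
    Set-Location "C:\NVIDIA\DisplayDriver\Display.Driver"

    # Run the script after editing the example section at its bottom so it
    # applies the desired 3D profile (e.g. the "3D App - Game Development"
    # setting described in the Pix4D support article linked above).
    .\Manage3dProfiles.ps1

The key point is simply that the script can be invoked from an elevated PowerShell prompt even when the corresponding control panel option is missing.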

This is the section I altered (the default file vs. the changed version, shown in the attached screenshots).

Ran it on the server, and this is the result: it changed to the correct mode.

We were able to reduce the processing times for the 20 control files from 1 h 14 min to 11 min (highlighted in green in the attached results).

Hope this helps.


Well done! So all we need now is for Pix4D to validate this and provide an updated, Xeon-optimized version of the app.

How do we get them to do this?

I know this is an old subject, but it seems that the majority of issues (slowness) when using a Xeon machine with a Quadro relate to the Quadro driver settings. Selecting “3D Game Development” in the NVIDIA control panel seems to solve the issue. We didn’t know that when this thread started!

However, I am now considering purchasing an i9 Skylake machine and I am not sure if it’s worth investing in Quadro or GeForce. I read that Quadro cards are almost identical to GeForce, but it’s the different drivers that make the difference.

Anyone have any updates on this since this subject started?


Our most recent system was built with Xeon processors but with GeForce video cards (1080 Tis).  Pix4D does not appear to benefit from the double-precision calculations that Quadro cards offer, but benefits most from a high CUDA core count, for which the GeForce series cards are much more economical.

I have just finished building the system below. How should I benchmark it against your results, gentlemen?

  • AMD Ryzen Threadripper 1950X
  • MSI X399 Gaming Pro motherboard
  • MSI GeForce 1080 Ti
  • 64GB RAM
  • Liquid cooled
  • Samsung 850 SATA SSD


Troy, I would suggest running the projects from the Pix4D computer hardware community benchmark post and sharing your results there.