Maximum CPU value

I received a question about CPU resources from the end user.

[System Configuration]
CPU1: Xeon E5-2697 v4 2.3 GHz (18 cores x Hyper-Threading = 36 logical threads)
CPU2: Xeon E5-2697 v4 2.3 GHz (18 cores x Hyper-Threading = 36 logical threads)
 Total: 36 CPU cores, 72 logical threads

RAM: 256 GB
GPU: NVIDIA Quadro P5000
  Note: setting “3D App - Game Development”

Software used: Pix4D
 Pix4D Enterprise Workstation

[Current state]
Pix4D uses only 36 of the logical CPU threads. Is this limited by the software?

If so, please indicate the maximum number of CPU threads that Pix4D can use.

 

I haven’t seen a maximum core limit from testing different options on Amazon EC2, but a Pix4D person can answer that better.

I will say that your CPU combination there is a total waste of money for Pix4D. Granted, it does get you to 256 GB of RAM, so that is the only advantage over a Core i9 CPU. For the most part, Pix4D wants the highest GHz, and adding more cores beyond 8 to 10 does not provide much improvement for the HUGE additional cost.

Also, having too much RAM can cause significant accuracy and quality problems in the point cloud. Photogrammetry is not an exact science like a laser scan, so a lot of variables affect the processing…most of which I have never seen discussed in the public domain. But if you process the same set of pictures 40-50 times like I have done, you find out what most of those variables are.


Some substeps of step 1 cannot run in parallel, so for those substeps the number of cores is less important than the clock speed. On the other hand, for the processes that do run in parallel, the number of cores is very important and can greatly reduce processing time.
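To put rough numbers on that trade-off, here is a small back-of-the-envelope sketch based on Amdahl’s law. The parallel fractions are made-up illustrative values, not measured Pix4D figures:

```python
# Illustrative only: Amdahl's law, where the serial part of a step caps the
# overall speedup no matter how many cores are added.
def speedup(parallel_fraction, cores):
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / cores)

for p in (0.50, 0.90, 0.99):  # hypothetical parallel fractions of a step
    scaling = ", ".join(f"{c} cores -> {speedup(p, c):4.1f}x" for c in (8, 36, 72))
    print(f"{p:.0%} parallel: {scaling}")
```

Even a step that is 90% parallel tops out below a 10x speedup, which is why clock speed still matters for the serial substeps.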

This Community post has some interesting information on the subject: https://community.pix4d.com/t/3787

Otherwise, to learn more about processing performance with different configurations, I can also recommend this article: https://support.pix4d.com/hc/en-us/articles/115003928846

If there are other users who have some experience to share on the subject, feel free to add your comments! I would be interested to read about it.


Thank you for all the information. The existing customer has already built this system.
The problem is that not all cores are used during the processing step.
The customer’s question: is it known behavior that half of the CPU resources go unused?

Yes, this is known. Not all steps can be processed in parallel, so in certain steps not all cores are used. For steps that can be parallelized, it can also be that the size and type of project does not need as many cores, and that fewer are sufficient to process the project.

Question for Adam Jordan, P.E.:

Can you point me to more information about your comments on ‘too much RAM can cause significant accuracy and quality problems’?  I am curious as to what you mean by this.

Thomas, you won’t find any data (that I know of) on the subject because the individual projects are so varied.  I can tell you that significant differences in point cloud results can be attributed to how much RAM Pix4D uses in conjunction with hardware architecture differences.

I have spent a lot of time and money testing one data set for 6-9 months in the process of investigating the best 3D cell tower modeling possible…by Pix4D or any other software.  I haven’t kept track of the exact count of trials but I am sure it is up to 100 by now…all with the same set of pictures.

The main items to test are the CPU/motherboard differences (Core vs. Xeon), amount of RAM, and GPU differences (Quadro vs. GeForce).

The bottom line is that my $6,000 desktop system will likely outperform a $20,000 server system in both speed and quality. But even my $6,000 desktop can perform worse than a $3,000 system on smaller projects, in both speed and quality, depending on the RAM allocation.


By RAM allocation, are you talking about a software setting?

My real interest is in what actual differences you are seeing in data quality that you attribute to ‘too much RAM’. This is a troubling concept for me. I am a land surveyor by profession, and data quality is a key issue for the work we are doing. If there are going to be differences in data quality because a system has too much RAM, that is not a good thing. My problem is that I have worked with a range of projects from 200 12 MP images to 6,000 36 MP images, and I absolutely need the same level of quality on both data sets.

Can you elaborate on exactly what effects you are seeing?

You can adjust the RAM within Pix4D for your testing. Your projects may be different from mine, and the exact process I use in cell tower 3D modeling is a trade secret. You can also lower accuracy and data quality with nearly every setting in Pix4D (or artificially increase detail when it actually lowers accuracy), so great attention has to be paid to each setting by itself and how it interacts with the other settings. Hopefully you are taking the weeks and months of testing needed to validate data integrity for your specific project type (surveying will be different from modeling).

I share your concern, as my modeling is used for structural analysis, so data quality can literally mean life or death…I am a licensed professional engineer.


Thanks for the response, Adam. We have done testing to ensure that the product meets our needs, and I do understand in general your comments about the multitude of settings affecting the accuracy of results.

My question still stands, though, with respect to what effects or anomalies you are seeing in the resultant data from processing with too much RAM. As a concept, this does not make sense to me, and it would indicate a serious flaw in the processing algorithms that Pix4D is using.

Further, I have monitored system usage during my testing, and in most cases not all of the RAM is even utilized. I have two systems, one with 128 GB and one with 512 GB of RAM, and have monitored usage during testing to see what resources get utilized. In most cases with small to mid-size jobs (a maximum of 800-1,000 36 MP images) I have not seen more than 40 to 50 GB of RAM utilized even though more is available. The software appears to use only what it needs, so how would more RAM being available cause issues? And again, specifically what issues are you talking about? Not trying to be argumentative, just trying to get to the bottom of this, and I do appreciate your responses so far.
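For anyone who wants to reproduce that kind of check, here is a minimal monitoring sketch in Python using psutil (not a Pix4D tool; the process-name match is an assumption, so adjust it to whatever the Pix4D executable is called on your machine):

```python
# Periodically sample total system RAM usage and the combined RAM of any
# running Pix4D processes while a project is processing.
import time
import psutil

def sample(interval_s=30):
    while True:
        mem = psutil.virtual_memory()
        pix4d_rss = sum(
            p.info["memory_info"].rss
            for p in psutil.process_iter(["name", "memory_info"])
            if p.info["name"] and "pix4d" in p.info["name"].lower()
            and p.info["memory_info"] is not None
        )
        print(f"system: {mem.used / 2**30:6.1f} / {mem.total / 2**30:6.1f} GiB used"
              f" | Pix4D processes: {pix4d_rss / 2**30:6.1f} GiB")
        time.sleep(interval_s)

# sample()  # leave running alongside a processing job and note the peak
```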

I’d also be interested to better understand what you mean, Adam. RAM should not change the quality of the output. It can make your project fail if there is not enough RAM, but the amount of RAM should not have an effect on accuracy.

Here are 3 pictures of the point cloud and report from 3 different hardware configurations (about 6 months ago) with identical pictures and settings (first is my old desktop, second is Amazon EC2, and third is Microsoft Azure).  I also have some other projects comparing just the RAM but it will take me a bit longer to dig up.  And I am running a new project today on my desktop (upgraded to a second GTX 1080Ti) as well as the Pix4D cloud to give another comparison.

EDIT - I noticed the text was hard to read so here is a link to the original screenshots: https://www.dropbox.com/sh/aiio8fl3lxwceup/AABQmVsHQq-m1yXu_88OhI46a?dl=0 

The extra background noise is the most obvious difference but it doesn’t account for the nearly 15% variation in total point cloud density when using the same processing area.  Maybe the accuracy is just fine but hardware certainly can make a difference…but my guess is that not every project will show this large a variation.

Thanks for posting, Adam.  This helps to see what you are seeing with the different systems processing the same data.  This is a bit alarming, as I would think the end results should be the same independent of processing hardware, given identical inputs and software settings.

Having said that, are you sure that it is memory that is the root cause here? Each system is very different in several other ways as well, which could have an equal impact on processing; namely, the CPU and GPU subsystems vary from machine to machine (i7 vs. Xeon, GeForce vs. Quadro vs. Tesla). Those are a lot of variables in the mix to pin this down to just the quantity of memory being utilized.

You made a comment before that photogrammetry is not an exact science.  I would argue that, in the past, this comment may have held more weight with the human element involved in the mapping process (thinking old school stereoplotter equipment here).  But with these newer processes, I would think that the end results should be more repeatable, particularly coming from the same set of data as a starting point. 

I would be interested to hear from Pix4D staff as to a likely explanation for the variability of results you are showing.  One general problem for me as a professional land surveyor is that I do not have a thorough understanding of how the inner workings of the software produce the end results we are getting.  Some of this will not be known due to trade secrets issues.  So in general, I have to do thorough empirical testing to ensure that the end results are acceptable to my needs (and I have done so).  Ultimately, I have to be able to defend the work I am signing off on, and this ‘equipment based variability’ in processing really throws another wrench into the works for this subject.


Thomas, that was to show hardware differences…I have to dig up some other test projects for just RAM differences and post those later this week.

Also, running the same project a second time on the same hardware will result in different points when you zoom in.

None of this information means the results are wrong or bad…just realize the limitations in accuracy as with any traditional method.

I’d like to have a closer look at the projects before making any comment. Adam could you upload the three of them (images, results folder, .p4d, any additional data you have used,…) to a file sharing service and send them through a support request? Thanks!

Here is the new RAM comparison data: https://community.pix4d.com/t/3579-RAM-Comparison-in-Point-Cloud-Results 


To rekindle the question, I am also curious about maximum CPU usage. I have a 64-core Threadripper 3990X, and Pix4D can only see 64 of the 128 available threads. Is this a NUMA-node issue? Why can’t Pix4D see, and use, the other 64 threads? As for RAM, Pix4D used to max out the RAM on my machine (512 GB of DDR4-3200). It doesn’t do that anymore; the most I have seen used is around 30%. What has changed?

I came on here to ask this exact question. I’ve got a customer who uses Pix4D and is looking for a hardware upgrade. Is the default Windows scheduler limit of 64 cores per process coming into effect here, or does Pix4D use its own scheduler?

Basically, is it worth the customer buying a 64C/128T Threadripper, or will they only benefit from a 32C/64T part? They are currently maxing out 32 cores on their existing system.
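One way to check whether that 64-thread ceiling is the Windows processor-group boundary rather than anything Pix4D-specific is to query the group layout directly. A small diagnostic sketch (Win32 API via Python’s ctypes; it only reads the topology and changes nothing):

```python
# On systems with more than 64 logical processors, Windows splits them into
# processor groups of up to 64 each. A process that is not group-aware is
# scheduled within a single group, so it tops out at 64 threads.
import ctypes

kernel32 = ctypes.windll.kernel32
kernel32.GetActiveProcessorGroupCount.restype = ctypes.c_ushort
kernel32.GetActiveProcessorCount.restype = ctypes.c_uint
kernel32.GetActiveProcessorCount.argtypes = [ctypes.c_ushort]

ALL_PROCESSOR_GROUPS = 0xFFFF

groups = kernel32.GetActiveProcessorGroupCount()
print(f"active processor groups: {groups}")
for g in range(groups):
    print(f"  group {g}: {kernel32.GetActiveProcessorCount(g)} logical processors")
print(f"total: {kernel32.GetActiveProcessorCount(ALL_PROCESSOR_GROUPS)} logical processors")
```

A 64C/128T system typically reports two groups of 64, which would line up with only half of the threads being used by a single process.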

Hi @pix4d26 ,

At Pix4D we do not test hardware ourselves, but Puget Systems does. Check their results: “AMD Threadripper 3990X: Does Windows 10 for Workstations speed up photogrammetry?” and “Pix4D 4.4 CPU Performance: AMD Threadripper 3990X 64 Core”. I hope you will find them useful.

Regards

It’s worth the upgrade if you’re processing multiple projects. If you go to the Details tab of Task Manager, scroll down to Pix4D, right-click on it, and choose “Set affinity”, you will be allowed to select which NUMA node you want to use (0 or 1). Setting one project to use node 0 and another to use node 1 will max out your threads without the projects having to share resources.
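The same split can also be applied at launch time instead of after the fact, using the /NODE switch of the built-in Windows start command. A hedged sketch in Python (the executable path is an assumption; point it at your actual Pix4D install):

```python
# Launch one Pix4D instance per NUMA node so two concurrent projects do not
# compete for the same cores or memory locality.
import subprocess

PIX4D_EXE = r"C:\Program Files\Pix4Dmapper\pix4dmapper.exe"  # assumed path, adjust

def launch_on_node(node):
    # "start" is a cmd.exe builtin; /NODE binds the new process to a NUMA node.
    # The empty "" is the window-title argument that start expects.
    subprocess.Popen(f'start "" /NODE {node} "{PIX4D_EXE}"', shell=True)

# launch_on_node(0)  # first project on node 0
# launch_on_node(1)  # second project on node 1
```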