RAM Comparison in Point Cloud Results

This started in another thread, but I wanted to highlight my findings from the past 6+ months of validating accuracy in Pix4D with various hardware setups.

This is a 3D model project of a cell tower. Your specific projects may have different results, but hopefully this gives you some ideas on how to test and validate your own work.

I took one project and ran just the point cloud creation in Step 2 with three different RAM settings within Pix4D: 128GB, 64GB, and 32GB.


Here is 128GB:

And 64GB:

And 32GB:


While you can see more noise in the 128GB run, I cleaned the point cloud and the removed noise did not account for the total point count difference, so the extra points are mostly good data, not just noise.

The cluster count is what you have to watch, not the actual amount of RAM. I hope this example shows why everybody needs to validate their data, because cluster count does affect point cloud results.

Thanks for creating a dedicated post on this and for sharing your findings.

More RAM means you can process more images in a single cluster. Having more images in a single cluster increases the probability of finding the minimum number of matches needed to create a 3D point in the densification. The minimum number of matches is set in the processing options.


I believe that the difference in the number of points is noticeable because of the specific use case of modelling cell towers, where the images cover the same object from all sides; hence, if you add more images to a cluster, more points can be created. Creating more points also means more processing time. If you had a grid flight plan to create an orthomosaic, the difference would be smaller, since the geometry is less complicated.
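To make that intuition concrete, here is a rough toy simulation. This is my own sketch, not Pix4D's actual algorithm: it assumes a point's views are scattered across clusters at random, which ignores the spatial grouping a real pipeline does, but it shows why larger clusters make the minimum-match threshold easier to reach:

```python
import random

# Toy model (illustration only, not Pix4D's implementation): a candidate
# 3D point is seen in `n_views` images; the project's images are split
# into clusters of `cluster_size`. The point is densified only if at
# least `min_matches` of its views land in the same cluster.
def survival_rate(n_images=1000, n_views=6, min_matches=3,
                  cluster_size=250, trials=20000):
    n_clusters = max(1, n_images // cluster_size)
    hits = 0
    for _ in range(trials):
        views = [random.randrange(n_clusters) for _ in range(n_views)]
        if max(views.count(c) for c in set(views)) >= min_matches:
            hits += 1
    return hits / trials

# More RAM -> larger clusters -> a higher share of candidate points
# reaches the minimum number of matches.
for size in (125, 250, 500, 1000):
    print(f"cluster size {size}: {survival_rate(cluster_size=size):.2f}")
```

With a single cluster the survival rate goes to 1.0; with many small clusters a noticeable fraction of candidate points never reaches the threshold, which is consistent with the point count differences above.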

That makes sense, Pierangelo. I am now working on a survey/real estate mapping project and will report back my findings for that project type.


Looking forward to seeing your findings. Really interesting work you’re doing!

Thanks for your continued efforts, Adam.

In the above series of projects, was the RAM limited physically or by the software settings in Pix4D?

Tom

Never mind - I did not read your whole post; looks like it was software settings.

Looks like I have some additional experiments of my own to run.

Thanks again!

Tom

Here are the results of changing the RAM via the Pix4D settings on a simple grid mapping mission, flown with Pix4Dcapture now that it works with my M600 and Sony camera. You can see the point cloud difference by looking at the walls of the house, and this result is basically the same as with my cell tower project, which used orbits only.

64GB with 121.23 million points:

32GB with 111.31 million points:

16GB with 105.93 million points:
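For quick reference, here is the relative drop from the counts above (the percentages are my own arithmetic on the reported numbers):

```python
# Point counts reported above, in millions; percentages are my arithmetic.
counts = {"64GB": 121.23, "32GB": 111.31, "16GB": 105.93}
baseline = counts["64GB"]
for ram, pts in counts.items():
    drop = (baseline - pts) / baseline * 100
    print(f"{ram}: {pts:.2f}M points ({drop:.1f}% below the 64GB run)")
```

That works out to roughly an 8% drop at 32GB and about 13% at 16GB, relative to the 64GB run.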

Adam, am I right in seeing that the 32GB RAM iteration was 6 hours faster than the 128GB iteration?

Erik, that is correct, but more importantly the point cloud has 9 million fewer points… no amount of time savings is worth bad results. Also keep in mind that good or bad results can be judged differently from project to project and customer to customer.

There is a significant time and quality difference on the grid project too.

Yeah, I totally get that, it’s constantly a balancing act between speed and quality for me.

Still on the learning curve with that one. The 1% gain in density per m³ vs. the 250% difference in processing time is interesting.

I appreciate everyone sharing their data here; it's really helpful.

You can’t compare density because I didn’t set a processing area… just look at the total point count.

A processing area can also greatly affect the quality of the point cloud… anybody like me doing high-accuracy, engineering-level work absolutely must run similar tests, or bad data can result.

Interesting!

If we follow the same reasoning as above, it makes sense that there are more points in places such as the walls of the building, as these are less visible from a nadir perspective. Hence, when there is more RAM and fewer clusters (each with more images), it is more likely to find the minimum number of matches for the densification in these places. If the points are indeed created in such areas, I would argue that they have a small impact on the DSM and orthomosaic, as these typically use the highest points of the point cloud, which should already be there. The cell tower use case seems to benefit more from that RAM difference, as it creates additional points that can be used for inspection. What do you think, Adam?
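To illustrate the DSM point above: a DSM keeps only the highest point in each grid cell, so extra points on vertical walls rarely change the raster. Here is a minimal sketch (my own illustration with a made-up 0.5 m cell size, not how Pix4D builds its rasters):

```python
import math

# Minimal max-z rasterization: keep the highest z per (x, y) grid cell.
def dsm_from_points(points, cell=0.5):
    dsm = {}
    for x, y, z in points:
        key = (math.floor(x / cell), math.floor(y / cell))
        dsm[key] = max(dsm.get(key, float("-inf")), z)
    return dsm

# A roof point at z=5 m and a wall point at z=2 m fall in the same cell;
# the wall point never wins the max, so extra wall points from a larger
# cluster run mostly help inspection, not the DSM or orthomosaic.
pts = [(1.0, 1.0, 5.0),   # roof
       (1.1, 1.0, 2.0)]   # wall
print(dsm_from_points(pts))  # {(2, 2): 5.0}
```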

So, does it look like 64GB would be a good trade-off between speed and quality?

Matthew,

That is impossible to answer because it varies widely with project type and size. I buy the maximum RAM possible and tune my settings to produce the best deliverable. I wish there were an easy answer, but there is a reason it takes an expert to produce the best results… there is no Easy Button.

Ah, that’s right. I can tell the program to use only a certain amount of RAM; I’d forgotten about that. It does seem counterintuitive that more RAM would lead to more processing time, but I think I understand why now that I’ve considered it.