

I am using PIX4Dmatic to process a project with a total of 26,000 images (45 MP each).
My hardware configuration is:
Platform: Windows 10 Pro
CPU: Intel® Core™ i7-1070, 4.7 GHz
GPU: GeForce GTX 1650 Super/PCIe/SSE
Vendor: NVIDIA Corporation
I have successfully run calibration, which took 2 d 8 h 2 s.
During densification, the program ran smoothly for the first three days, reaching 90% completion. But then, sadly, it slowed to a rate of about 1% per day.
On reviewing the log file, I found the message "Densify finished".
Several other events were then initiated:
"Create point cloud"
"Shuffle points"
"Write point cloud"
"Open point cloud"
At the moment, "Shuffle points" is only at 63%, while the overall project completion has risen by just 2% over the past two days.
Resource utilization has dropped greatly: the CPU sits at roughly 1%, while RAM shows 99%. In Task Manager, however, I can see only 93 MB of RAM in use by the application, even though the overall RAM utilization percentage reads 99%.
No other processes are running apart from PIX4Dmatic and Task Manager.
In the full list of running apps, I found none using more than 1 MB of RAM.
What might have contributed to this sudden drop in performance?
Also, how can I resolve the issue?

Hi @ogebra, thank you for providing all the information and adding the screenshot.

The most probable cause of why the processing is so slow is the lack of RAM. You mention that you are using 32 GB of RAM while we recommend at least 128 GB RAM for projects of this size.

Since there is not enough space in RAM, the system starts to use the disk as extra memory. This is known as paging (or swapping) and would explain why the processing is tremendously slower. At this point we cannot guarantee that the processing will eventually finish, and would recommend increasing your hardware resources or processing smaller projects.
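To illustrate the symptom described above, here is a minimal, hypothetical sketch (this helper is not part of PIX4Dmatic or Windows; the thresholds are illustrative assumptions): near-full RAM combined with significant page-file use, while the CPU sits almost idle, is the classic signature of a system that is paging to disk.

```python
# Hypothetical heuristic for interpreting memory counters such as those
# visible in Task Manager. Thresholds are illustrative assumptions only.

def likely_paging(ram_used_percent: float, swap_used_percent: float) -> bool:
    """Return True when near-full RAM plus noticeable swap/page-file use
    suggests the OS is using the disk as extra memory."""
    return ram_used_percent >= 95.0 and swap_used_percent >= 10.0

# The situation reported above: RAM at 99% while the CPU is almost idle.
print(likely_paging(99.0, 40.0))  # -> True
# A healthy machine with memory headroom:
print(likely_paging(60.0, 0.0))   # -> False
```

When this condition holds, every point-cloud operation that would normally run from RAM is instead waiting on disk I/O, which is consistent with progress dropping from ~30% per day to ~1% per day.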

I hope this helps,

This information is helpful, but I have two suggestions:

  1. In future versions, consider adding something like a resource monitor that advises on the optimal processing options to use based on the hardware configuration.
  2. Add an option to merge projects after calibration and densification, so that we can at least save some time when creating a larger orthomosaic and DSM.

Anyway, we will proceed with processing and share feedback with you on whether it successfully finishes the job, and how much time the process takes. Let's hope nothing like a crash happens.


Thank you for informing us of this issue. We have added an improvement to make the step that creates the point cloud less resource intensive; it will be included in an upcoming version. That way we hope you won't encounter the same issue in the future.

As for the two other suggestions: we do plan to add a way to configure which hardware the software should use for processing, which can be combined with the hardware recommendations shared above. The optimal processing options will generally depend on the type of project and on the accuracy you need to deliver, and together with the minimal hardware recommendations that should in theory be enough to set up the project. That said, you're right that the current hardware recommendations are based on "default" processing options, and we could run additional tests to cover e.g. "High" density settings. We're also thinking about how best to implement the merging of projects.

How did the processing go in the end, did you manage to finish the project?
Looking forward to your feedback.