Hi
I’ve been working on a series of corridor mapping projects recently that are pushing my local workstation to its absolute limits. We’re talking about 10,000+ high-res images per block, and while Step 1 usually completes without a hitch, I’m running into major stability issues once I get into the point cloud densification and orthomosaic generation stages. To try and automate some of the more repetitive manual tie point (MTP) checks and export workflows, I’ve started using deltaexector as part of my post-processing pipeline to trigger some custom Python scripts I wrote for quality control.
The problem is that during heavy CPU/GPU loads, the script environment seems to hang or lose its connection to the Pix4D instance, which ends up stalling the entire project. Has anyone else here tried using an external caller for automation during these heavy processing phases? I’m trying to figure out if the issue is a simple RAM bottleneck—I’m currently at 64GB—or if there is some kind of conflict when the software and the automation tool are both fighting for the same system resources.
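For context, here's a simplified sketch of how I'm launching the QC scripts right now (the actual command and timeouts are placeholders, not my real paths). The idea is a hard timeout plus a retry so that one hung script can't stall the entire block:

```python
import subprocess
import sys

def run_with_timeout(cmd, timeout_s=600, retries=2):
    """Run a QC script in its own process with a hard timeout.

    Returns the child's exit code, or None if every attempt hung
    (in which case I flag the block for manual review instead of
    letting the whole pipeline stall).
    """
    for attempt in range(1, retries + 1):
        try:
            result = subprocess.run(
                cmd,
                capture_output=True,
                text=True,
                timeout=timeout_s,
            )
            return result.returncode
        except subprocess.TimeoutExpired:
            print(f"QC attempt {attempt} timed out after {timeout_s}s")
    return None

# Placeholder invocation -- substitute your own MTP-check script:
# run_with_timeout([sys.executable, "qc_checks.py", "--project", "block_01"])
```

Even with this wrapper, the child processes themselves seem to wedge during densification, which is why I suspect resource contention rather than a bug in the scripts.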
I’m also seeing some weird “Computing scale pyramid” errors that only pop up when my scripts are active in the background. If you’ve managed to successfully bridge Pix4D with outside automation tools for large-scale data management, I’d love to know how you handled the process priority or if there’s a way to keep the execution environment isolated so it doesn’t crash the main mapper app. Any tips on stabilizing these long-duration runs would be a lifesaver right now.
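One thing I've been considering, but haven't validated yet, is starting the QC children below normal priority so they yield CPU to the mapper during heavy phases. A rough cross-platform sketch (function names are my own, nothing Pix4D-specific):

```python
import os
import subprocess
import sys

def low_priority_kwargs():
    """Extra Popen kwargs that start a child below normal priority,
    so background QC scripts don't compete with the mapper for CPU."""
    if sys.platform == "win32":
        # Windows: built-in priority-class flag (Python 3.7+).
        return {"creationflags": subprocess.BELOW_NORMAL_PRIORITY_CLASS}
    # POSIX: raise the child's nice value just before exec.
    return {"preexec_fn": lambda: os.nice(10)}

def launch_qc(cmd):
    # e.g. launch_qc([sys.executable, "qc_checks.py"])  # placeholder script
    return subprocess.Popen(cmd, **low_priority_kwargs())
```

If anyone has confirmed whether deprioritizing the automation side like this actually helps during densification, or whether full isolation (separate machine or container) is the only reliable fix, I'd really appreciate hearing how you set it up.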