
How to reduce processing time without losing point cloud quality

Good Morning!

I would like some help with processing: how can I improve the quality of the point cloud? (i.e., the DSM comes out with poor quality, and the .las output has little detail.)


Thank you!

Well, every project is different, but there are some general guidelines:

Simply have the best desktop gaming computer you can afford ($4,000+) and run 1 project dozens of times with different Pix4D settings.  Or perhaps even simpler, hire my company to do the processing for you as I already have both hardware and software optimized for the highest quality possible.

I already have hardware and software per Pix4D's specifications for large-scale processing; what I really need is to see where I can gain in processing time.

CPU: Intel® Xeon® E3-1240 v5 @ 3.50GHz
RAM: 32GB
GPU: NVIDIA Quadro M2000 (Driver: 23.21.13.8816)


That is a poor price-per-performance hardware setup… I can't say much more than the other threads around here that have tried and tried to make "server" setups work. Yes, it can work, but always at a higher cost, so there is zero benefit in my opinion; all the testing I have seen shows that Pix4D is highly optimized for gaming desktops.

Personally, that Xeon CPU might be okay since you have already paid for the hardware, but I would only use a GTX 1000-series video card with lots of system and video RAM.

My hardware setup (others run the same one) can process projects of up to 210 gigapixels in far less time, with up to 33% better point clouds, than a setup similar to yours. That is pretty large scale: 5,000 pictures at 42MP each.
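As a quick sanity check on that "210 gigapixel" figure, it is just image count times megapixels per image (a back-of-envelope calculation, not an official Pix4D metric):

```python
# Project size in gigapixels: image count x megapixels per image.
num_images = 5_000
megapixels_each = 42

total_megapixels = num_images * megapixels_each   # 210,000 MP
total_gigapixels = total_megapixels / 1_000       # 210 GP

print(f"{total_gigapixels:.0f} gigapixels")       # -> 210 gigapixels
```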

Dealing with the same issue as well. Here's the problem I face: optimal parameters for point cloud densification result in a massive file. The last quarter-section project I did with over 3,000 images produced a 60GB point cloud… how can you bring that into any other program, or how can you generate outputs that clients can view? I know my initial point cloud is accurate from the quality report; GCPs and check points are survey grade.

My engine is an i9-7980X processor, 64GB RAM, and a GeForce Titan X, so processing at full density is possible, but is it worth it? Not really. Accuracy is established and proven in step 1, so if you process steps 2 and 3 with lower density, this will not affect the accuracy of your original "tie point" point cloud from step 1. The point cloud from step 2, as I understand it, is meant to "fill in" the tie points from step 1.

Save time by running the point cloud at 1/2 or 1/4 image scale with low-density point generation and see how much faster it is. This process is very new to me and still in the trial stage, so I will evaluate and get back to you. Figured I'd share my workflow since I too am buried in data at the moment.
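On top of the lower-density processing settings, an already-exported cloud can also be thinned afterwards. Here's a minimal voxel-grid downsampling sketch in plain Python (the point data and the 0.5 m voxel size are made up for illustration; for a real .las file you'd first read the coordinates with a library such as laspy or PDAL):

```python
# Voxel-grid downsampling: keep one point per cubic cell to cap density.
# Points are (x, y, z) tuples in metres; voxel_size controls output density.

def voxel_downsample(points, voxel_size):
    kept = {}
    for p in points:
        # Index of the voxel this point falls into.
        key = (int(p[0] // voxel_size),
               int(p[1] // voxel_size),
               int(p[2] // voxel_size))
        # Keep only the first point seen in each voxel.
        kept.setdefault(key, p)
    return list(kept.values())

# Tiny synthetic example: 4 points, 2 of them in the same 0.5 m voxel.
cloud = [(0.10, 0.10, 0.10),
         (0.20, 0.15, 0.12),   # same voxel as the first point
         (0.90, 0.10, 0.10),
         (0.10, 0.90, 0.10)]

thinned = voxel_downsample(cloud, voxel_size=0.5)
print(len(cloud), "->", len(thinned))   # 4 -> 3
```

The voxel size is the knob: a bigger cell keeps fewer points, so you trade detail directly for file size, which mirrors the 1/2 and 1/4 scale idea inside Pix4D itself.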

Thanks for the important tips, John Grant… let's talk, and I'm sure we'll do a bit better!

John,

Quality means many things and while I haven’t tested your theory, it makes perfect sense that accuracy is set in Step 1. Quality can also mean noise and detail, which is where the settings in Step 2 are key.

Not many customers ever want to see a point cloud, so how to handle such massive amounts of data depends on the deliverables. Raw point cloud manipulation is best in Leica Cyclone, with typical export into AutoCAD or Revit via JetStream and CloudWorx in the engineering world. The survey side never needs such detail, so point cloud size shouldn't be an issue, but again, surveyors typically want other deliverables besides the point cloud.

At the same time, you have to have a solid plan for handling TBs of data from the field to the customer. Pix4D Cloud can help with customers viewing results, but you likely still have to transfer TBs of data yourself.


Adam,

I agree noise is also an issue for deliverables and accuracy. I believe step 3's DSM generation helps with noise filtering and smoothing of the raw point cloud from step 2; interested to know your thoughts. I'm also interested in your suggestion of Leica Cyclone; I'll look into that one. No doubt Pix4D's engine is powerful enough to create vivid detail, but the by-product is very large data sizes. That's why I suggest full processing in step 1, with 1/2 or 1/4 scale and low density in steps 2 and 3, would help in this area. We're still talking 1,000 points per m³, as opposed to 13,000 to 16,000 per m³. Please evaluate and tell me your thoughts. Maybe Cleiton can also evaluate this process and see if he can find a balance between detail/accuracy and data file sizes.
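To put rough numbers on that density drop, here's a back-of-envelope estimate (the 60GB figure is from the post above; the ~34 bytes per point is an assumed LAS point record size, and the reduction factor is simply the density ratio, so treat the results as ballpark only):

```python
# Rough back-of-envelope: dropping from ~13,000-16,000 pts/m^3 to
# ~1,000 pts/m^3 keeps roughly 1 point in 13-16, so the file
# shrinks by about the same factor.

full_cloud_gb = 60.0     # dense-cloud size reported in the thread
bytes_per_point = 34     # assumed LAS point record size

full_points = full_cloud_gb * 1e9 / bytes_per_point
print(f"dense cloud: ~{full_points / 1e9:.1f} billion points")

for dense_density in (13_000, 16_000):
    factor = dense_density / 1_000   # fraction of points kept: 1 in 13..16
    print(f"at 1,000 pts/m^3 vs {dense_density:,}: "
          f"~{full_cloud_gb / factor:.1f} GB instead of {full_cloud_gb:.0f} GB")
```

So the thinned cloud lands somewhere around 4 to 5 GB, which is far more practical to hand to a client or pull into CAD software.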

FYI, I capture data at 30-35 m with 70/70 to 80/80 overlap, so my photos are not short of data, and pulling back on my point cloud is not hurting my models' detail. Also, I haven't seen too much noise, and my GCPs/checkpoints are usually within 5 mm to 50 mm accuracy.

Agreed, you STILL have to move around a lot of big files. I guess that's the cost of doing business. Haven't figured out a way around that.