Understanding the Integration of LiDAR Data in Pix4Dcloud's "depth-generation" Algorithm

Hello,

I am currently conducting a study on the use of Pix4Dcatch/Cloud in combination with a viDoc RTK rover and an iPhone 14 Pro for a Swiss geomatics company.

While working with Pix4Dcloud, however, I came across a point I would like to understand more precisely.

My question concerns how the LiDAR data captured during acquisition (via ARKit and its associated depth/confidence maps) is integrated during post-processing.

After reviewing the processing log (.log), I noticed that the “depth-generation” algorithm is used to generate additional points that enrich the previously densified point cloud. What I don’t fully understand is how this algorithm works:

  • Does it combine photogrammetric data with LiDAR-derived depth information to reconstruct these new points?
  • Or are only the depth maps used during this step?
  • How are the depth-generated points merged with the dense point cloud?
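To make the question concrete, here is my current mental model of what "depth-generation" might do, sketched in Python. This is purely my own assumption, not Pix4D's actual implementation: I imagine each per-frame depth map being back-projected through the (photogrammetrically refined) camera intrinsics and pose, with low-confidence pixels discarded, and the resulting points appended to the dense cloud. All names and parameters here (`backproject_depth`, `min_conf`, the intrinsics layout) are hypothetical.

```python
import numpy as np

def backproject_depth(depth, confidence, K, cam_to_world, min_conf=2):
    """Hypothetical sketch: back-project a depth map into 3D world points.

    depth:        (H, W) metric depth in metres (as ARKit's sceneDepth provides)
    confidence:   (H, W) integers 0..2 (ARKit: low / medium / high)
    K:            (3, 3) pinhole intrinsics matrix
    cam_to_world: (4, 4) camera pose (e.g. as refined by the photogrammetric BA)
    Only pixels with confidence >= min_conf are kept.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))  # pixel coordinates
    mask = confidence >= min_conf
    z = depth[mask]
    # Pixel -> camera coordinates via the pinhole model
    x = (u[mask] - K[0, 2]) * z / K[0, 0]
    y = (v[mask] - K[1, 2]) * z / K[1, 1]
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=1)  # homogeneous
    # Camera -> world; these points would then be merged into the dense cloud
    return (cam_to_world @ pts_cam.T).T[:, :3]
```

Is this roughly what happens, i.e. LiDAR depth constrained by the photogrammetric poses, or are the depth maps used in some other way?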

I am also aware that in Pix4Dmatic, it is possible to generate a depth point cloud. Is the technique used in Pix4Dcloud similar to the one implemented in Pix4Dmatic?

Thank you in advance to anyone who can provide some insight into this process.

Best regards,
QB