
Is LiDAR data used during processing?

I noticed that the LiDAR data is used during capture to provide augmented reality feedback, but is it used in any way during processing in Pix4Dmapper?

It seems to export depth map data and several CSV files, but how is this used in Pix4Dmapper?

Hello @grant1701,

The LiDAR data is only used by the Pix4Dcatch app to provide augmented reality feedback for better image acquisition. The depth map and other files are not used while processing in Pix4Dmapper.


Then I’m confused.

  1. Why include all the extra files in the exported data if the images are the only data that is necessary?

  2. iOS doesn’t use the LiDAR data to track the device’s position, and if it’s not used by Pix4Dmapper, then why even use LiDAR at all?


Yes, I also see that LiDAR is used only to position the photos. The only good thing is that the final processed result may then be at the right scale.

But really @grant1701, I don't know why Pix4D doesn't use the LiDAR info for processing. It is good that LiDAR is on board, but right now Pix4D just doesn't use it in the final result, which in my view is very bad; a lot of information goes to waste. LiDAR's main benefit is that it gives a fast and accurate point cloud, but right now Pix4D processes all results using only the pictures.

All I can say is that there is room to progress. And if Pix4D doesn't do it, someone else will! Simple :)

I very much hope there will be good news soon, @Kapil_Khanal!


I have been working on converting the XYZ positions CSV into revised decimal-degree positions for the images. That way, you are at least partially using the benefits of the LiDAR for processing through the positioning data. I don't have an easy conversion workflow yet, but it does seem to work. My next step is to pair the iPad Pro with a GNSS device to see if I can get a better origin point for the revised XYZ calculations.
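For anyone attempting the same workaround, here is a minimal sketch of the conversion step described above: turning local east/north offsets in metres (as might be found in a Pix4Dcatch XYZ CSV) into approximate WGS84 decimal degrees around a known origin. The column layout of the CSV and the source of the origin coordinate are assumptions on my part, not Pix4D's specification, and the flat-earth approximation is only reasonable over the tens of metres a handheld scan covers.

```python
import math

WGS84_A = 6378137.0  # WGS84 semi-major axis in metres

def enu_to_latlon(origin_lat, origin_lon, east_m, north_m):
    """Convert local east/north offsets (metres) relative to an origin
    into approximate decimal degrees. Flat-earth approximation: fine for
    a small scan area, not for large distances."""
    dlat = math.degrees(north_m / WGS84_A)
    dlon = math.degrees(east_m / (WGS84_A * math.cos(math.radians(origin_lat))))
    return origin_lat + dlat, origin_lon + dlon

# Example: a camera 10 m east and 5 m north of a hypothetical origin
lat, lon = enu_to_latlon(46.5197, 6.5668, east_m=10.0, north_m=5.0)
```

The resulting latitude/longitude pairs could then be written into the image geotags before import, so Pix4Dmapper uses the LiDAR-derived positions as initial image positions.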


In a more recent reply it was stated that LiDAR depth is only used for processing if you upload directly from Pix4Dcatch to Pix4Dcloud. My question is: why would we want to pay for cloud processing when we already paid for Pix4Dmapper so we could process things ourselves? It seems like they are trying to force people to sign up for unneeded monthly fees.


Hello @mdavis1,
Pix4Dmapper doesn't support processing depth images due to a compatibility issue. However, if you want to process just the RGB images, you can export them from Pix4Dcatch and process them in Pix4Dmapper. For more information, visit (iOS) How to export Pix4Dcatch projects.

Can you explain what the compatibility issue is? It seems to me that a LiDAR point cloud would be easy to import into software already designed to process point clouds. I'm not a software expert, though.


I know that at the moment the only way to process data acquired by an iPhone 12 Pro, like mine, is to upload it to the cloud and wait for processing. But why not add to your pipeline the possibility to process data on the desktop using our Pix4Dmapper licenses? What are the "compatibility issues"? P.S.: as an owner of Apple products since 1984, I'm proud to use your software in the OS X version (Pix4Dsurvey and others), but I'm still waiting for a Pix4Dmapper OS X version (since November 2017 and version 3.0.18). Kind regards

Hi Grant
Firstly, thanks a lot for taking the time to try Pix4Dcatch and for reaching out to us. I admit it can be confusing why Pix4Dmapper doesn't include the LiDAR information while processing, so let me clarify. The algorithm that fuses the images and LiDAR is fundamentally new and still in beta, so as a first step it is available only on Pix4Dcloud. This algorithm will be adopted by the other Pix4D products soon. In the meantime, we look forward to your input and feedback.

I have sent you a direct message to understand better your needs.

Thanks and best regards


It would be interesting to share results obtained using Pix4Dcatch and an iPhone 12 Pro here, wouldn't it?

Hello Antonio, we have a separate category for sharing your projects with other Pix4D users, so I would suggest posting it under Pix4D Cafe.

I too am a little confused about the LiDAR integration. Under the FAQ page for Pix4Dcatch it is stated:

...The LiDAR points will compensate for the lack of 3D points over reflective and low texture surfaces.

How does this work if Pix4Dcatch only utilizes LiDAR to provide augmented reality feedback for better image acquisition?

Does LiDAR functionality have any impact after data acquisition?

Hello @gustavhf,

Pix4Dcatch uses LiDAR data to generate depth information for augmented reality feedback, and that depth data is also used when the images are processed on Pix4Dcloud.

Okay! What exactly is the LiDAR data used for in the processing, and in which areas can I expect a LiDAR processed model to differ from one without LiDAR enabled?

From their responses to these questions, it seems to be pretty pointless software. The LiDAR seems to only be used for camera calibration, and only for cloud processing. For a desktop user like me, it’s useless.

There are also significant memory limitations on the number of points even an iPad Pro can hold at one time without crashing, and iOS still uses the same augmented-reality visual tracking to remember the positions of previous points while you scan new ones. That means as soon as a point leaves the scanner's field of view, it is no more accurate than any other phone's camera-based AR tracking.

Hi @gustavhf
Thank you for writing to us. To answer your question:
What exactly is the LiDAR data used for in the processing?
If you use a LiDAR-enabled iOS device and choose to upload your project with "Process with depth" enabled, the LiDAR data is used directly in two ways:

  1. To estimate a more accurate scale* of the reconstruction and
  2. To fill low texture areas that photogrammetry alone is not able to reconstruct.
    It is important to note that visible features are still needed in all the images, even if the LiDAR data is used.
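To illustrate the first point, here is a toy sketch of one way LiDAR depths could be used to recover the global scale of a reconstruction. This is purely illustrative and not Pix4D's actual algorithm: photogrammetry alone reconstructs geometry only up to an unknown scale, and a robust ratio of measured LiDAR depths to reconstructed depths at matching points gives a scale estimate.

```python
import statistics

def estimate_scale(photo_depths, lidar_depths):
    """Estimate a global scale factor as the median of per-point ratios
    of LiDAR depth to photogrammetric depth. The median resists outliers
    such as noisy LiDAR returns on reflective surfaces."""
    ratios = [l / p for p, l in zip(photo_depths, lidar_depths) if p > 0]
    return statistics.median(ratios)

# Toy data: the reconstruction is internally consistent but 2x too small,
# and the LiDAR measurements carry a little noise.
photo = [1.0, 2.0, 3.0, 4.0]
lidar = [2.0, 4.1, 5.9, 8.0]
scale = estimate_scale(photo, lidar)  # -> 2.0
```

Multiplying the whole model by this factor would bring it to metric scale, which is consistent with the earlier observation in this thread that LiDAR-assisted results come out "in the right scale".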

in which areas can I expect a LiDAR processed model to differ from one without LiDAR enabled?
To give you an example, please check out these two projects: project processed without LiDAR and project processed with LiDAR. Notice the holes in the point cloud of the project processed without LiDAR, and see how these holes are filled in the project processed with LiDAR. Particularly in areas where photogrammetry is unable to create tie points, the LiDAR data is used to fill those areas.

The “Process with depth” option is still in beta, meaning that we already see it working well for many cases, but is still being evaluated and improved before we can guarantee the same level of quality as the other Pix4D processing options.

*Accurate scale depends on many factors, but most important is that the content of the images is of good quality (good lighting and visible feature details) and covers the scene well.