I’m new to this software. I used an iPad Pro 2020 to capture the space with PIX4Dcatch and uploaded the files to Cloud/Mapper. However, PIX4Dcatch seems to show more points and 3D texture than the project does once uploaded to Cloud/Mapper. When I imported the files into Blender, the textures/model looked fine.
This is currently a test run before going on-site tomorrow to conduct a full internal building survey.
The first screenshot is from the iPad before export, followed by Cloud and Mapper:
Is there something I’m doing wrong when uploading to Cloud and/or Mapper?
Welcome to the community!
The project displayed on the device includes the available LiDAR data. That LiDAR data is not processed when uploading to PIX4Dcloud with the Use classic image processing option selected. Next time, use the default processing to include the LiDAR data. PIX4Dmatic would be another option for processing LiDAR data on the desktop.

No LiDAR data is processed when Use classic image processing is selected.
To explain the differences in the images presented: LiDAR data is helpful in areas where traditional photogrammetry struggles (blank, low-texture surfaces), and can provide additional density in those regions. The following article helps illustrate the differences: Depth and dense fusion