Have you ever compared an indoor 3D point cloud produced by Pix4D mapper from input images to a 3D point cloud from a depth camera such as the Intel Realsense D435? Which one would be better?
We have never done that comparison.
I am not an expert on depth cameras, but I believe their error increases substantially with the distance to the object. So the answer would depend on many factors, with distance to the object probably being the main one.
If you ever do the comparison, please let us know as we would be very interested.
Thank you very much.
Indeed, it is an interesting fact that the Intel Realsense D435's RMS error grows quickly as a function of distance. From that perspective, it seems that using Pix4D mapper could give better accuracy at distances greater than ~7 m.
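To give a rough idea of why stereo depth error grows so quickly, a commonly used error model for stereo cameras is err ≈ z² · Δd / (f · B), i.e. quadratic in distance z. The sketch below uses this generic model; the baseline, focal length, and subpixel disparity accuracy are illustrative guesses, not official D435 specifications.

```python
def stereo_depth_rms(z_m, baseline_m=0.050, focal_px=640.0, subpixel=0.08):
    """Approximate RMS depth error (in meters) of a stereo camera at distance z_m.

    Generic stereo error model: err ~ z^2 * subpixel / (focal_px * baseline_m).
    Default parameter values are illustrative assumptions, not D435 datasheet
    figures -- they only show the quadratic growth with distance.
    """
    return z_m ** 2 * subpixel / (focal_px * baseline_m)

for z in (1, 3, 5, 7, 10):
    print(f"{z:>3} m -> ~{stereo_depth_rms(z) * 100:.1f} cm RMS")
```

Doubling the distance quadruples the expected error under this model, which is consistent with the intuition that photogrammetry from close-range images can outperform a depth camera beyond a few meters.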
The Intel Realsense T265 can provide the pose in real time. Given non-geotagged images as input, can Pix4D mapper output the pose of each image (position and orientation) after processing? That would make it unnecessary to have the T265 onboard.
Pix4D can use images without geolocation; the camera positions and orientations will be computed during processing.
Of course, if there is no geolocation and no Ground Control Points (GCPs), the final result will be given in an arbitrary coordinate system with no scale.
If you add GCPs, the project will be georeferenced too.
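To illustrate what GCPs contribute, here is a minimal NumPy sketch (not Pix4D's actual implementation) of Umeyama's least-squares similarity transform, which recovers the scale, rotation, and translation mapping an arbitrary model coordinate system onto georeferenced GCP coordinates. All point values in the demo are made up.

```python
import numpy as np

def similarity_from_gcps(model_pts, world_pts):
    """Estimate scale s, rotation R, translation t mapping model -> world
    (Umeyama least-squares similarity transform).

    model_pts, world_pts: (N, 3) arrays of matching points, N >= 3,
    not all collinear.
    """
    mu_m = model_pts.mean(axis=0)
    mu_w = world_pts.mean(axis=0)
    Xm = model_pts - mu_m
    Xw = world_pts - mu_w
    # Cross-covariance between the two centered point sets
    U, S, Vt = np.linalg.svd(Xw.T @ Xm / len(model_pts))
    d = np.sign(np.linalg.det(U @ Vt))  # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt
    var_m = (Xm ** 2).sum() / len(model_pts)
    s = (S * np.diag(D)).sum() / var_m
    t = mu_w - s * R @ mu_m
    return s, R, t

# Demo: a model in arbitrary units mapped to "world" coordinates by a
# known (invented) scale, rotation, and translation, then recovered.
model = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)  # 90 deg about z
world = 2.5 * model @ Rz.T + np.array([10.0, 20.0, 5.0])
s, R, t = similarity_from_gcps(model, world)
print(s)  # recovered scale
```

This is exactly why at least three well-distributed GCPs are needed: with fewer, or with collinear points, the similarity transform (and hence the scale and georeference) is not uniquely determined.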
I was thinking that geotags (position, orientation) helped the intrinsic 3D reconstruction (removing some local distortion, etc.). But as you said, they only help with extrinsic aspects (scaling and georeferencing).
If your images have geotags, the software will make use of them and they will help in the process.
What I wanted to point out is that even with no geotags, the software can still work.