Processing fails with images from the stock Android smartphone camera app

Hi, I am trying to create a simple point cloud of a mug as a test case. I took 60-plus images with my smartphone and imported them into PIX4Dmatic.
I made sure all the images have GPS tags.
Note: it gave me a warning, “image orientation is missing”, even though the orientation tag is present in the images.
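
For reference, here is roughly how the tags can be checked outside PIX4Dmatic (a minimal sketch assuming a recent Pillow version; the file name is just a placeholder):

```python
from PIL import Image

img = Image.open("IMG_0001.jpg")        # placeholder file name
exif = img.getexif()

orientation = exif.get(0x0112)          # 274: Orientation tag
gps_ifd = exif.get_ifd(0x8825)          # 34853: GPS sub-IFD (empty if absent)

print("Orientation:", orientation if orientation is not None else "MISSING")
print("GPS tags:", dict(gps_ifd) if gps_ifd else "MISSING")
```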

When I start processing, no matter which template I select, it always fails with “Failed to calibrate cameras: no calibrated cameras”.

Sample image attached for reference:

Hi @baji.shaik, the use case looks simple, but it is actually quite tricky. The reason is that the object you are modeling has essentially no texture and is all white. Photogrammetry works by finding unique “keypoints” in the images and “matching” them across images, so it needs features that stand out visually from their surroundings. If everything is uniformly white, there is little to detect or match.
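
To illustrate the idea, here is a minimal sketch using OpenCV's ORB detector (purely for illustration; this is not what PIX4Dmatic uses internally, and the file names are placeholders). On two photos of a plain white mug you will typically see very few keypoints and even fewer matches, whereas a well-textured scene produces thousands:

```python
import cv2

# Placeholder file names for two overlapping photos of the mug
img1 = cv2.imread("mug_01.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("mug_02.jpg", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and compute descriptors in each image
orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match descriptors between the two images (cross-check keeps only mutual best matches)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2) if des1 is not None and des2 is not None else []

print(f"Keypoints: {len(kp1)} / {len(kp2)}, matches: {len(matches)}")
```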

If you really want to create a model of this mug, I would add some texture artificially, e.g. by placing the mug on a newspaper. Since the mug itself would still be white, you might also want to use a smartphone that captures both images and LiDAR (the PIX4Dcatch workflow), so that it can be reconstructed.

Otherwise, if it is simply about trying out PIX4Dmatic, you could model something else instead.

Hi @Pierangelo_Rothenbuhler, thanks for the response. I just tried another object, an indoor plant against a black background, with a sample of 63 images. It still fails during calibration with “Processing error: failed to calibrate cameras: No calibrated cameras”.

Note: I can't use PIX4Dcatch at the moment; I am trying to build a proof of concept for point cloud generation with a normal Android camera. Thanks, and let me know.

Hi @baji.shaik

Could you please share your dataset (images, project folder, …), e.g. via a WeTransfer link or a shared Google Drive folder? I'd like to have a closer look.

That said, a black background is not necessarily better, because it is homogeneous too. Here, though, you seem to have more texture on the plant itself. Seeing the data should help.

Thanks!

Hello @Pierangelo_Rothenbuhler, this is a different set of images. There are only a few of them, but they should give you an idea.

Note: I also tried to generate a point cloud with PIX4Dcatch today and was able to generate one successfully. I looked at the EXIF tags of the images captured with PIX4Dcatch, and I can see they have more tags filled in. Could that be the problem with my non-PIX4Dcatch images? If so, can you tell me which tags are required to successfully generate a cloud? I will try to embed those. Thanks.
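
For reference, a quick way to compare which tags are present in each set is sketched below (assuming Pillow is available; the file names are placeholders):

```python
from PIL import Image, ExifTags

def tag_names(path):
    """Return the set of human-readable EXIF tag names present in an image."""
    exif = Image.open(path).getexif()
    return {ExifTags.TAGS.get(tag_id, hex(tag_id)) for tag_id in exif}

catch_tags = tag_names("pix4dcatch_0001.jpg")   # placeholder PIX4Dcatch image
stock_tags = tag_names("android_0001.jpg")      # placeholder stock camera image

print("Only in PIX4Dcatch image:", sorted(catch_tags - stock_tags))
print("Only in stock camera image:", sorted(stock_tags - catch_tags))
```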

Hi baji.shaik,

The thing that sticks out to me most is the blurry images. They will certainly cause problems when constructing a 3D model. I do not believe the information in the EXIF tags is what determines success here; good-quality image content with sufficient overlap will matter most for this test. I also see a foot here and there. While not a deal breaker, it is always a good idea not to introduce moving objects from one frame to another, as this can cause issues as well.
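
If it helps, one simple way to flag blurry shots before importing them is to score each image with the variance of the Laplacian (a minimal sketch assuming OpenCV; the folder name and threshold are placeholders to tune on your own data):

```python
import cv2
from pathlib import Path

BLUR_THRESHOLD = 100.0   # placeholder: lower variance usually means a blurrier image

for path in sorted(Path("dataset").glob("*.jpg")):   # placeholder folder
    gray = cv2.imread(str(path), cv2.IMREAD_GRAYSCALE)
    if gray is None:
        continue  # skip files OpenCV cannot read
    score = cv2.Laplacian(gray, cv2.CV_64F).var()
    if score < BLUR_THRESHOLD:
        print(f"{path.name}: possibly blurry (score {score:.1f})")
```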

Regards,
-Jon