I am trying to process data from two flights over the same vegetated area to produce a dense point cloud and orthomosaic. Both flights used the same drone (a Mavic 3M); one flight was flown in transects and the other in transects perpendicular to the first, creating a double-grid pattern. Both flights had about 85% front and side overlap. I captured both RGB and multispectral images, but for now I am only processing the RGB images. The flights were a couple of hours apart, which is where I think things have gone wrong.
Originally I processed the two flights separately from start to finish and everything looked good: clean outputs, and the accuracy (assessed using some validation points) looked normal - approximately 0.8 m error in the x and y axes and 1.5 m in z.
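For reference, the per-axis accuracy figures above can be reproduced as a root-mean-square error between the surveyed validation-point coordinates and the coordinates measured in the reconstruction. This is a minimal sketch with made-up coordinates (the point list and values are hypothetical, not my actual data):

```python
# Minimal sketch (hypothetical numbers): per-axis RMSE for validation points,
# comparing surveyed coordinates against those measured in the reconstruction.
import math

# Each entry: ((surveyed_x, surveyed_y, surveyed_z),
#             (measured_x, measured_y, measured_z)) in metres
validation_points = [
    ((500100.0, 6100200.0, 35.0), (500100.6, 6100199.5, 36.2)),
    ((500250.0, 6100350.0, 36.5), (500249.2, 6100350.9, 34.9)),
    ((500400.0, 6100120.0, 34.2), (500400.7, 6100120.5, 35.8)),
]

def axis_rmse(points, axis):
    """Root-mean-square error along one axis (0=x, 1=y, 2=z)."""
    sq = [(meas[axis] - surv[axis]) ** 2 for surv, meas in points]
    return math.sqrt(sum(sq) / len(sq))

for name, axis in (("x", 0), ("y", 1), ("z", 2)):
    print(f"{name} RMSE: {axis_rmse(validation_points, axis):.2f} m")
```

With these invented points the x/y errors come out under a metre while z is roughly double, a pattern similar to what I saw in the individually processed flights.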
But now I want to combine the data from the two flights. I first tried processing all the images in one project. After the Initial Processing step there was a huge 38% 'relative difference between initial and optimized internal camera parameters'. I checked a few of the validation points: the x and y errors were equivalent to the individually processed flights, but z had a huge error (around 20 m). I did not continue to the point cloud and ortho stages.
I then redid the above step but selected 'All Prior' for the internal camera parameters in the advanced settings. That improved things a bit: the relative difference between initial and optimized internal camera parameters dropped to 16%, and the z accuracy improved to around 8 m. But these values still indicated something was wrong.
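For intuition, the percentage in the quality report tracks how far the optimized internal parameters (focal length, principal point, distortion) drift from their initial values during bundle adjustment. As a rough illustration only (this is not Pix4D's exact formula, and all numbers below are made up), a per-parameter relative difference could be computed as:

```python
# Rough illustration (NOT Pix4D's exact formula) of a per-parameter relative
# difference between initial and optimized internal camera parameters.
# All values are hypothetical, chosen so the focal-length drift lands near 16%.
initial = {"focal_mm": 12.29, "px_mm": 8.75, "py_mm": 6.51}    # hypothetical
optimized = {"focal_mm": 14.26, "px_mm": 8.80, "py_mm": 6.48}  # hypothetical

def relative_difference_pct(init, opt):
    """Percent change of each optimized parameter relative to its initial value."""
    return {k: abs(opt[k] - init[k]) / abs(init[k]) * 100 for k in init}

for name, pct in relative_difference_pct(initial, optimized).items():
    print(f"{name}: {pct:.1f}%")
```

A large drift like this usually means the self-calibration is compensating for something else (in my case, I suspect the two-hour gap between flights).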
I then followed the process described in 'How to merge projects - PIX4Dmapper'. For each flight, I ran Initial Processing separately, created and marked five MTPs (making sure they had the same names and marked the same physical objects in both projects), and reoptimised each project. I then merged the two projects, added the validation points (and reoptimised again), and generated the dense point cloud and orthomosaic. Everything seemed to work much better: a 2.75% relative difference between initial and optimized internal camera parameters, and x, y, and z errors all around 0.5 m. But when I reviewed the orthomosaic I saw ghosting:
Any further suggestions on how I can get a better output? Should the MTPs have exactly the same coordinates in both initial projects?