GCPs - auto marking and adjustment

Hi, I have a few quick questions which someone may be able to answer please:

We use a relatively large number of coordinated 3D GCPs on survey projects. Typically we mark these manually on the images - do people make much use of the automarking tool? How does this work: does it try to pattern-match pixels based on the first manual marking, or is it less sophisticated than that? We have tried it out a few times but have never been completely satisfied with the marking it generates. How many images do people usually mark?

When the GCPs have been marked and the project is reoptimised, how is the adjustment performed? Is it a simple least-squares adjustment to all the GCP coordinates given - in other words, the whole existing model is best mean fitted to the control… or does the existing model get draped over the control in some more complicated way? I am wondering about the effect of a poor-quality control point right at the edge of the model and whether that one point can skew the model in that area only.

Thank you to anyone with any info or comments…

Hi Jon,

Welcome to Pix4D Community!

How does this work, does it try to pattern-match pixels based on the first manual marking, or is it less sophisticated than that?

By clicking on Automatic Marking, PIX4Dmapper will automatically mark the 3D point in the images that have not yet been marked.

It does this by searching the remaining images for the best colour correlation with the pixels around your manual mark. If the colour correlation is good, the position of the GCP is therefore refined in more images than just the ones you clicked. Only the images that show a green or a yellow cross are taken into account during processing.
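Pix4D does not publish the exact matching algorithm, but "colour correlation" of a clicked point is commonly implemented as normalized cross-correlation (NCC) of a small image patch around the manual mark. Here is a minimal, purely illustrative sketch of that idea; all names and the brute-force search strategy are my own assumptions, not Pix4D's implementation:

```python
import numpy as np

def ncc_match(template, image):
    """Slide `template` over `image`; return the top-left (row, col) of the
    best normalized cross-correlation score, plus the score itself.
    Illustrative sketch only - not Pix4D's actual algorithm."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best_score, best_pos = -2.0, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = t_norm * np.sqrt((wz ** 2).sum())
            if denom == 0:
                continue  # flat patch: correlation undefined, skip it
            score = float((t * wz).sum() / denom)
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score

# Demo: cut a patch out of a synthetic image and find it again
rng = np.random.default_rng(0)
img = rng.random((40, 40))
tpl = img[10:18, 22:30]
pos, score = ncc_match(tpl, img)
```

A score near 1.0 means a confident match; in practice a threshold on the score is what decides whether an automatic mark is accepted, which is also why auto-marking struggles when the target looks different between images (scale, perspective, lighting).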

When the GCPs have been marked and the project is reoptimised, how is the adjustment performed? Is it a simple least-squares adjustment to all the GCP coordinates given - in other words, the whole existing model is best mean fitted to the control… or does the existing model get draped over the control in some more complicated way?

Those matching points, as well as approximate values of the image position and orientation provided by the UAV autopilot, are used in a bundle block adjustment, e.g. (B. Triggs and P. McLauchlan and R. Hartley and A. Fitzgibbon, 2000) and (R. Hartley and A. Zisserman, 2000), to reconstruct the exact position and orientation of the camera for every acquired image (Tang, L. and Heipke, C., 1996). For more information, please refer to Scientific papers about UAV and drone mapping.
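To make the least-squares idea concrete: a bundle block adjustment minimizes image reprojection residuals and GCP coordinate residuals jointly, with each GCP weighted by its stated accuracy, so it is neither a simple rigid best-fit nor a "draping". The toy sketch below illustrates only that principle; the geometry, weights, and names are hypothetical, and the real solver also refines camera poses and uses robust weighting:

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative sketch only: Pix4D's solver is not public. A bundle block
# adjustment is, at its core, a least-squares problem in which reprojection
# residuals and weighted GCP residuals are minimized together.

def project(X, cam_pos, f=1000.0):
    """Toy pinhole projection with identity rotation (illustration only)."""
    d = X - cam_pos
    return f * d[:2] / d[2]

def residuals(params, cams, obs, gcp_idx, gcp_xyz, gcp_weight):
    """Stack reprojection residuals for every observation, plus weighted
    residuals pulling the GCP tie point toward its surveyed coordinates."""
    X = params.reshape(-1, 3)
    res = []
    for i, x in enumerate(X):
        for cam, uv in zip(cams, obs[i]):
            res.extend(project(x, cam) - uv)      # image residuals
    res.extend(gcp_weight * (X[gcp_idx] - gcp_xyz))  # GCP constraint
    return np.array(res)

# Two toy camera stations observing two 3D points
cams = [np.array([0.0, 0.0, 0.0]), np.array([5.0, 0.0, 0.0])]
X_true = np.array([[1.0, 1.0, 10.0], [2.0, -1.0, 12.0]])
obs = [[project(x, c) for c in cams] for x in X_true]

# Point 0 is a GCP; its surveyed coordinates agree with the truth here.
x0 = (X_true + 0.2).ravel()                 # perturbed initial model
fit = least_squares(residuals, x0,
                    args=(cams, obs, 0, X_true[0], 10.0))
X_adj = fit.x.reshape(-1, 3)
```

Because every residual is weighted, a poor-quality GCP with a generous stated accuracy gets a small weight, and its influence is balanced against the many tie-point observations around it, which bounds how far a single bad point at the edge of the block can pull the model.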

I hope this information was useful for you.

Have a very nice day!

Best regards.