Hello!
I have a drone survey with multiple GCPs. I have been playing around with Pix4Dmapper and noticed that, depending on the combination of GCPs I include, the RMS error on the GCPs varies pretty widely. Even when I include the exact same set of GCPs, I sometimes get different RMS error estimates on different processing runs.
I have attached four quality reports; all for the exact same imagery. What’s confusing to me is that:
- the error changes depending on which GCPs are included
- the error changes between surveys. I have multiple surveys of this site that I have stitched separately, and it's not the same GCP that increases the error every time. However, in every survey I've stitched so far, the errors vary widely depending on the combination of GCPs I include.
- the error gets distributed among the GCPs differently every time. For example, if I get low error with all GCPs except one, then adding that GCP increases the overall error, but the increase shows up on a GCP that was already included and previously had very low error.
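That last behavior is actually what you'd expect from a least-squares adjustment: when a new constraint disagrees with the others, the fit shifts to compromise, and the residual gets smeared across points that previously looked clean. Here is a toy numeric sketch of that effect (a simple 1-D line fit standing in for the bundle adjustment; the point values are made up for illustration):

```python
import numpy as np

def fit_and_residuals(x, y):
    """Least-squares line fit; return per-point residuals and RMSE."""
    A = np.vstack([x, np.ones_like(x)]).T
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    residuals = y - A @ coef
    rmse = float(np.sqrt(np.mean(residuals ** 2)))
    return residuals, rmse

# Four "GCPs" that agree well with each other.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.00, 1.01, 1.99, 3.00])
res4, rmse4 = fit_and_residuals(x, y)

# Add a fifth point carrying a ~10 cm bias: the whole fit tilts to
# accommodate it, so residuals grow on points that were fine before.
x5 = np.append(x, 4.0)
y5 = np.append(y, 4.10)
res5, rmse5 = fit_and_residuals(x5, y5)

print(rmse4, res4)  # small RMSE, tiny residuals everywhere
print(rmse5, res5)  # larger RMSE, spread across ALL points
```

The biased point drags the solution, so the reported error lands partly on the previously clean points rather than only on the one that caused it.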
Someone suggested that this issue could stem from inaccuracy or imprecision in my GCP coordinates. They were collected with an Emlid GNSS system, and I'm not sure how we could make them more accurate.
Do you think that this is caused by imprecision in the GCPs? I don’t think it is caused by variation in how I’m linking the GCPs to the imagery; I have tried to be very careful about that.
Secondly, if I do a run where I leave out a GCP and get a low RMS error, is that accurate? Or is that underestimating the error somehow?
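For what it's worth, my understanding is that the RMS reported on GCPs that were *used* in the adjustment is optimistic, because those points constrained the solution; a held-out point gives an independent estimate. A toy sketch of the difference (same simple line-fit stand-in as above, made-up values, not Pix4D's actual adjustment):

```python
import numpy as np

def fit(x, y):
    """Least-squares line fit; return (slope, intercept)."""
    A = np.vstack([x, np.ones_like(x)]).T
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.00, 1.01, 1.99, 3.00, 4.10])

hold = 4  # index of the GCP held out as an independent check point
mask = np.arange(len(x)) != hold
coef = fit(x[mask], y[mask])

pred = coef[0] * x + coef[1]
# RMSE on the control points that constrained the fit (optimistic)
control_rmse = float(np.sqrt(np.mean((y[mask] - pred[mask]) ** 2)))
# Error at the independent check point (honest)
check_error = float(abs(y[hold] - pred[hold]))

print(control_rmse, check_error)  # check error is much larger
```

So a low RMS from a run that leaves out a GCP isn't necessarily wrong, but the better test of that run is the error *at* the left-out point, treated as a check point.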
Thanks!
20250130 ANM Stitch_report9.pdf (731.9 KB)
20250130 ANM Stitch_report8.pdf (733.2 KB)
20250130 ANM Stitch_report5.pdf (750.2 KB)
20250130 ANM Stitch_report6.pdf (750.1 KB)