Forcing Images to be Calibrated?

How do I force an image to be calibrated when I've set multiple MTPs on it and on the other images it's connected to, the features are clearly visible in all of them, and yet it is still discarded?

E.g. I have some overheads of a building, a few oblique overheads, and some ground shots. The overheads all join together just fine, but the sides of the building are fairly sparsely populated, though I can make out the windows as small rectangles.

In the obliques the windows are more visible, as are many points on the roof, and in the ground shots the same windows and many points on the front of the roof are also visible in at least 4 images.

I marked up all the window corners, 8 of them on one side (2 windows).

I marked up multiple points on the roof edge above the windows as well as the eaves of the roof and several other significant points. Ditto the ground shots.

So there are 22 MTPs, with at least 10 common to any pair of images; it should have all it needs to orient the images in 3D space fairly easily.
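For reference, here's a minimal sketch of why that should be plenty (this is just OpenCV on made-up window-corner coordinates and intrinsics, not anything Pix4D does internally): with a calibrated camera, five or more shared tie points already pin down the relative orientation between two views.

```python
# A minimal sketch, not Pix4D's internals: with a calibrated camera, a handful of
# shared tie points is enough to recover the relative orientation of two views.
# All coordinates and intrinsics below are invented for illustration.
import numpy as np
import cv2

# Hypothetical intrinsics for the single camera used for all shots
K = np.array([[2300.0, 0.0, 2000.0],
              [0.0, 2300.0, 1500.0],
              [0.0, 0.0, 1.0]])

# Eight 3D points on a wall (metres), standing in for the marked window corners
corners = np.array([[0.0, 0.0, 10.0], [1.2, 0.0, 10.0], [0.0, 1.5, 10.0], [1.2, 1.5, 10.0],
                    [3.0, 0.0, 10.2], [4.2, 0.0, 10.2], [3.0, 1.5, 10.2], [4.2, 1.5, 10.2]])

def project(points, rvec, tvec):
    """Project 3D points into the image of a camera at pose (rvec, tvec)."""
    pts, _ = cv2.projectPoints(points, rvec, tvec, K, None)
    return pts.reshape(-1, 2)

# View 1 roughly facing the wall, view 2 shifted sideways and rotated (an "oblique")
pts1 = project(corners, np.zeros(3), np.array([-2.0, -0.7, 0.0]))
pts2 = project(corners, np.array([0.0, -0.3, 0.0]), np.array([-4.0, -0.7, 1.0]))

# Eight correspondences comfortably determine the essential matrix...
E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
# ...which decomposes into the relative rotation and translation direction
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
print("relative rotation:\n", R)
print("translation direction:", t.ravel())
```

With 10 or more common MTPs per image pair, the geometry itself shouldn't be the limiting factor.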

But Pix4D just ignores all but the roof shots. I've tried high quality, low quality, 2, 3, 4 and 5 points per match, half-, quarter- and double-sized textures, Optimise, Rematch and Optimise, restarting from scratch - basically every option available - and it's just not interested.

Surely, if even only 3 points are shared between two images, it can calculate where all the others are in 3D space and at least look *around* the ones I've marked to find valid matches. But while the shots from above have hundreds of orange markers, the obliques and ground shots only have about 10 red ones plus the yellow ones I added, and the red ones it chooses are nowhere near the ones I've marked - they're off in some random bit of grass.

The least it should be able to do is find points *between* or *around* the ones I set at the window corners - they're white against a red brick wall - so even a single line between the corners I put on all the images would be a start, but… nothing.

It’s just not trying at all. How do I force it to believe the MTPs are valid and the images belong together instead of randomly looking everywhere *but* where I put the MTPs, then giving up?

Are the three groups shot with the same camera and lens?

And how do the contrast and sharpness compare between the groups - pretty close to each other?

 

Same camera, same exposure, same time/weather/lighting. Basically, after flying the aerials, I flew over again with the camera tilted down slightly and grabbed a few more, then hovered in front with the camera pointing forward.

I've noticed this in other places too, but there it manages to get better matches because there are lots of photos and enough of them get selected. This was a smaller subset, but the detail is all there.

Whatever criteria Pix4D uses for choosing photos, I think some sort of override where you can say “Yes, you really need to use this image, I’ve marked MTPs on it for a *very good reason*” would be useful.

I.e., the weighting, or whatever it uses for choosing an image, should be more heavily biased in favour of photos where you apply MTPs, and it should just look harder *around* those points, where it would then find *lots* of matches. The apparently random nature of its own point selection seems to fall over when the images only contain matching data in one section of the photo - such as an overhead paired with a ground/oblique shot that contains sky or distant, irrelevant data, even though at least a third of the photo is ideal for matching.
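In case it helps picture what I mean by “look harder around those points”, here's a rough sketch of guided matching around an MTP location. This is plain OpenCV, nothing to do with how Pix4D actually works, and the image names, point coordinates and search radius are all made up:

```python
# A rough sketch of "guided matching": only look for feature matches inside a
# window around a user-marked tie point in each image. Not Pix4D's algorithm,
# just an illustration of the idea with OpenCV.
import cv2
import numpy as np

def matches_near_mtp(img1, img2, mtp1, mtp2, radius=150):
    """Match ORB features found within `radius` pixels of the marked point in each image."""
    orb = cv2.ORB_create(nfeatures=2000)

    def local_features(img, centre):
        # Restrict detection to a circular window around the MTP pixel location
        mask = np.zeros(img.shape[:2], np.uint8)
        cv2.circle(mask, (int(centre[0]), int(centre[1])), radius, 255, -1)
        return orb.detectAndCompute(img, mask)

    kp1, des1 = local_features(img1, mtp1)
    kp2, des2 = local_features(img2, mtp2)
    if des1 is None or des2 is None:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    return sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# Hypothetical usage: the same window corner marked in a nadir and a ground shot
nadir = cv2.imread("DJI_0012.JPG", cv2.IMREAD_GRAYSCALE)
ground = cv2.imread("IMG_0301.JPG", cv2.IMREAD_GRAYSCALE)
print(len(matches_near_mtp(nadir, ground, (1540, 880), (998, 1750))))
```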

Are you able to process the groups independently? If so, which did not process?

From my side, using a Phantom V2+ I've mostly had very good results, but I've also had a few experiences like yours.

Nadirs always process (provided the data collection was proper); obliques are trickier, sometimes requiring stripping the geo data and then matching the data sets with MTPs/GCPs or whatever it takes.
We oversampled quite a bit in the example here and added constraints. The ATPs were very good, so luckily MTPs were not needed.

Mesh render.

I agree with ya to the degree that ATPs are generated as a process independent from the other processes. It may be a good idea to give the user options on how ATPs and MTPs interact with each other in the processing chain.

I know there is little help in this post for you, but for what it’s worth.

Hope Pix4D staff will comment on your problem!

Dear Dave, Gary,

Sometimes, if the software cannot find matches between two subgroups of images, it will keep the largest group and discard the rest as uncalibrated. 

The suggested workflow is to create different subprojects for different flights/angles. If you have both nadir and oblique images in the same project, it is difficult for the software to tie these together. What works well is to create two subprojects (or 3 in your case) and then merge these subprojects as explained here:

https://support.pix4d.com/hc/en-us/articles/202558529

When merging projects, it is important to add Manual Tie Points in each subproject that are on the same feature and that have the same name. This will tell the software that these are the same feature and that they should be matched.
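As a concrete illustration of how same-named MTPs tie subprojects together (this is only a conceptual sketch, not the Pix4D project format or API; all names and coordinates are invented):

```python
# Conceptual sketch only: when subprojects are merged, MTP marks are linked by
# name, so marks with the same name across subprojects become one tie point.
from collections import defaultdict

# Hypothetical marks: MTP name -> list of (image, pixel x, pixel y) per subproject
nadir_mtps   = {"window_NW_top_left": [("DJI_0012.JPG", 1540.2, 880.5),
                                       ("DJI_0013.JPG", 1603.7, 912.1)]}
oblique_mtps = {"window_NW_top_left": [("DJI_0107.JPG", 2210.4, 1433.0)]}
ground_mtps  = {"window_NW_top_left": [("IMG_0301.JPG", 998.6, 1750.3)]}

merged = defaultdict(list)
for subproject in (nadir_mtps, oblique_mtps, ground_mtps):
    for name, marks in subproject.items():
        merged[name].extend(marks)  # identical names collapse into one tie point

# "window_NW_top_left" now carries observations from all three subprojects,
# giving the optimisation a common feature to tie the blocks together.
print(merged["window_NW_top_left"])
```

The key point is that the names must match exactly across subprojects; a point with a different name is treated as a different feature.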