trouble processing oblique photos of warehouse roofs

I am trying out Pix4D as a tool for mapping features on large, flat warehouse roofs, but I am having trouble getting the process to work, and even when it completes, the resulting point cloud is too fuzzy to be accurate enough. My process has been as follows: I place approximately 6 stickers on various surfaces and locate them with RTK GPS to serve as GCPs. For photos, I aim for 80% overlap and walk down one side of the roof, always shooting ahead with the camera tilted so that each frame includes a bit of scenery beyond the roof edge (trees or nearby buildings, sometimes far away). When I reach the corner, I increase overlap as I turn and then head back along a parallel line. That process repeats until I have covered the whole roof. To fill in details, I also walk down a line taking photos perpendicular to my direction of travel, and in some cases I do a loop at the end of a line.

For one roof (85x45 metres), I took a total of 157 images with a Panasonic DMC-FZ1000 at 5472x3648 pixel resolution in JPG format. The camera was mounted on a telescopic pole that gave it a height of about 4 metres for all pics.
Initial processing took only 6 m 26 s, with 155 of 157 images calibrated, and the GCPs fit very well with a mean error of 35 mm. Point cloud densification, however, produced lots of noise and gaps, and typically takes 4-6 hrs. On my last attempt I set the minimum matches to 2 and the image scale to 1 (original image size, Slow), and it took just under 4 hrs to process. My hardware is a laptop: i7-3630QM with 20 GB RAM, a 1 TB SSD, and a GTX 670MX.
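For reference, here is a rough sanity check of the footprint and photo spacing I was aiming for on the first roof. It is only a sketch: it treats the shots as nadir (mine were tilted, so the real footprints are larger), and the sensor size of about 13.2 x 8.8 mm and wide-end focal length of roughly 9.1 mm are my assumptions for the FZ1000 rather than anything logged in the project.

```python
# Rough footprint / overlap check for the first roof (camera ~4 m above the roof).
# Assumed, not measured: 1-inch sensor ~13.2 x 8.8 mm, lens at its widest ~9.1 mm,
# shots treated as nadir even though mine were tilted forward.

def footprint(height_m, focal_mm, sensor_w_mm=13.2, sensor_h_mm=8.8):
    """Ground footprint (width, height) in metres for a nadir shot."""
    return (height_m * sensor_w_mm / focal_mm,
            height_m * sensor_h_mm / focal_mm)

def spacing_for_overlap(footprint_along_m, overlap=0.8):
    """Maximum step between exposures that still keeps the given forward overlap."""
    return footprint_along_m * (1.0 - overlap)

w, h = footprint(height_m=4.0, focal_mm=9.1)
print(f"Footprint at 4 m: {w:.1f} x {h:.1f} m")                       # ~5.8 x 3.9 m
print(f"Step for 80% overlap (short side along track): "
      f"{spacing_for_overlap(h):.1f} m")                              # ~0.8 m
```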

For the second roof (63x30 metres), I took all photos at head height, ending up with a total of 974 photos by setting the camera to record a picture every second as I walked around the roof. 7 GCPs were captured on stickers in the same way. Initial processing used only 84% of the images, took 2 h 20 m, and the solution failed to converge, throwing out one GCP altogether and giving errors of several metres on the other points. Initially I thought I had messed up a GCP, but after several attempts I could not get a successful solution.
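Out of curiosity I ran the same back-of-the-envelope arithmetic for this capture. Everything here is assumed rather than measured (roughly 1.7 m camera height, a walking pace of about 1.2 m/s, and the same sensor and focal-length guesses as above), so treat the numbers as a sketch only.

```python
# Same rough nadir-footprint arithmetic as above, applied to the head-height,
# one-photo-per-second capture. All figures below are assumptions, not measurements:
# ~1.7 m camera height, ~1.2 m/s walking pace, lens at ~9.1 mm, sensor ~13.2 x 8.8 mm.
height_m, focal_mm = 1.7, 9.1
ground_w = height_m * 13.2 / focal_mm        # across-track footprint, ~2.5 m
ground_h = height_m * 8.8 / focal_mm         # along-track footprint, ~1.6 m
baseline = 1.2 * 1.0                         # metres walked between exposures
forward_overlap = 1.0 - baseline / ground_h  # assumes short sensor side along track
print(f"Footprint: {ground_w:.1f} x {ground_h:.1f} m, "
      f"forward overlap at 1 photo/s: {forward_overlap:.0%}")   # roughly 27%
```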

I suspect the major problem with this type of project is the lack of automatically identifiable match points, because the roof surface is either a monotonous sea of stone (in one case) or long strips of sheathing (the other roof). I wonder whether it would help to bring a bucket of miscellaneous objects and scatter them around the roof. I also wonder whether the algorithms simply work better with near-nadir photos. On the first roof, I got surprisingly good results on the face of the building across the road, and all of the photos that picked up part of that building were close to perpendicular/normal to its face. If that is the case, would I be better off still using my telescopic pole (a cheap and easy substitute for a drone) but mounting the camera so that it faces almost straight down?

I realize that this is a long post, but I wanted to give sufficient detail. If there is a way to upload sample photos or a project report, I would be happy to supply that.

Almost forgot - I started my tests in version 1, but all of the results above come from 2.0.67.

What is the surface material of the roofs? If each image is too similar to the previous one, Pix4D will not be able to process the imagery accurately. I see this quite frequently when I have bodies of water in my images - the water surface is never processed completely. Perhaps, if you have an all-white roof, Pix4D is struggling to find matches and tie points in the images.

One of the roofs had a stone pebble surface and the other had 4’ wide swaths of rubber cladding with a pebbled surface similar to asphalt shingles. Yes - both are monotonous surfaces where matches are difficult to find. As a workaround, I am wondering about taking up a bunch of “stuff” just to scatter around the roof to serve as more distinct automatic tie points.