Hey Kestral!
I have some really awesome datasets I can share where I have done all of the above!
This first dataset was captured with a Phantom 4 RTK and PIX4Dcatch on my iPhone 11 Pro Max - no viDoc. I've highlighted the area I added that was captured with PIX4Dcatch; I used it to capture the under-slab piping as it was being installed.
You'll be able to see the piping and how it was run in great detail. If you look on the opposite side of the building, there is also under-slab pipe being installed, but because it was only captured with the P4RTK, the quality isn't really there to make anything out.
I used Manual Tie Points to align the catch images and P4RTK images together. Because the flight was flown with RTK accuracy, it pulled the catch images up into one large, accurate dataset that checked out against my onsite GNSS test shots. No GCPs were used.
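If it helps to see what I mean by "checked out," here's a minimal, hypothetical sketch in plain Python/NumPy (nothing PIX4D-specific, and all coordinates are made up) of comparing points picked from the merged cloud against GNSS test shots:

```python
# Minimal sketch: checkpoint residuals between a merged point cloud and GNSS test shots.
# Coordinates are placeholder values in an assumed projected CRS (E, N, elevation).
import numpy as np

# Hypothetical onsite GNSS test shots
gnss_shots = np.array([
    [500012.31, 4281004.87, 152.412],
    [500034.55, 4281021.10, 152.398],
])

# The same locations picked from the merged P4RTK + PIX4Dcatch point cloud
cloud_points = np.array([
    [500012.33, 4281004.85, 152.401],
    [500034.52, 4281021.14, 152.410],
])

diff = cloud_points - gnss_shots
horizontal = np.linalg.norm(diff[:, :2], axis=1)  # 2D (E/N) error per checkpoint
vertical = np.abs(diff[:, 2])                     # elevation error per checkpoint

print("Horizontal RMSE (m):", np.sqrt(np.mean(horizontal**2)))
print("Vertical RMSE (m):  ", np.sqrt(np.mean(vertical**2)))
```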
Capture time in PIX4Dcatch was 45 seconds, much faster than changing the batteries in the P4 and doing a separate, much lower flight.
Next up is my “science project” dataset. I say that because I’m still working on best practices and practical workflows to make this super simple.
The first link for this project shows the Parrot Anafi flight merged with a PIX4Dcatch dataset; a tree covered the majority of the courtyard, and I wanted to recover that missing data. Because of the big tree, this was also processed together using Manual Tie Points, as RTK does not work well under trees.
Now, getting to the meat and potatoes of your question: here is a link showing the Anafi flight and the courtyard, plus all the facades captured with PIX4Dcatch. You'll notice some interior rooms have been added in as well.
This dataset was captured and put together using a mix of viDoc RTK, Manual Tie Points, and GCPs.
Keep in mind this point cloud is very large, and it may be beneficial to change the point cloud view settings from adaptive to fixed to get a better view.