I am really struggling to get quality/acceptable results when generating the 3D mesh.
I’ve spent a fair amount of time adding surfaces, reoptimising, and cleaning up the densified point cloud, which looks great — and the orthomosaic is spot on for this test dataset (a single double-grid flight, GSD 0.98 cm). None of that appears to help the mesh output; if anything it seems worse! I’ve spent hours on the support site trying things out, but to no avail.
It’s a simple building in progress (with a lot of clutter), so there’s plenty of visual interest.
I’m also hesitant about this post as I don’t wish it to be a ‘versus’ post, but as an experiment I uploaded the raw dataset to Drone Deploy as a check — so no MTPs, surfaces, etc.
The results were significantly better. The Drone Deploy mesh was denser, had detail where it should, and dealt with clutter and thin items such as scaffolding more ‘elegantly’ (flattening spurious bits and pieces, etc.). Textures were handled better in places, with less smearing. So much better, in fact, that I was a bit taken aback. I would love to be able to get these results (or better) in the Mapper. I don’t like DD because there is no control — it’s fire-and-forget stuff — but its ‘out of the box’ results (in terms of mesh generation) are pretty good. Look at the definition of the dormers and the truck, and the detail on the round window.
Below are comparison screenshots.
If anyone could help, or at least show me what I’m doing wrong, I’d be grateful (I’d really like to be doing something wrong!).
Drone Deploy version (unsure why the screenshots are a bit dark — they didn’t appear like that in the app):