I have undertaken a drone survey of a graveyard to map grave plots and am having trouble removing overlying vegetation. There are a number of areas in the graveyard where trees or ivy are covering underlying grave plots in the orthomosaic that I have generated in Pix4D.
Below is an extract of the orthomosaic.
In the clipped point cloud below, one grave plot near the top of the image is almost completely obscured by the tree, and another, against the ruins of the church in the lower-left quadrant, is completely obscured by ivy.
I have tried editing the point cloud; however, even after I successfully remove the vegetation from the densified point cloud, Pix4D puts a lot of the overlying vegetation back when generating the orthomosaic (see below).
I have also tried regenerating the 3D textured mesh after editing the point cloud. Again, although both the point cloud and the textured mesh have the vegetation removed, the vegetation is reinserted when the orthomosaic is produced.
After editing the point cloud and assigning the points to the Deleted point group, you will need to reprocess Step 3 so that these points are not used.
Check here: https://support.pix4d.com/hc/en-us/articles/202560499.
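To make the "reprocess Step 3" advice concrete: the DSM that drives the orthomosaic is rebuilt from whatever points remain after editing, so deleted points simply stop contributing once Step 3 is rerun. The sketch below is not Pix4D's algorithm, only a rough illustration of that idea, using a naive highest-point-per-cell rasterization in Python:

```python
import numpy as np

def rasterize_dsm(points, cell_size):
    """Very rough DSM: highest Z per grid cell.

    points : (N, 3) array of x, y, z *after* the deleted/disabled points
             have been dropped -- they no longer influence the surface.
    """
    xy_min = points[:, :2].min(axis=0)
    cols = np.floor((points[:, 0] - xy_min[0]) / cell_size).astype(int)
    rows = np.floor((points[:, 1] - xy_min[1]) / cell_size).astype(int)
    dsm = np.full((rows.max() + 1, cols.max() + 1), np.nan)
    for r, c, z in zip(rows, cols, points[:, 2]):
        if np.isnan(dsm[r, c]) or z > dsm[r, c]:
            dsm[r, c] = z
    return dsm
```

The point is that editing the cloud alone changes nothing downstream; the surface (and everything built on it) only reflects the edit once it is regenerated.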
Nigel, note that the individual photos, and any mosaic made from them, are entirely different things from the point cloud or mesh. At high resolution the points or mesh can appear ‘photo-like’, but they are 3D objects. The terrain model they create will be used to geometrically rectify the individual images and to scale them, but that is their only contribution to the process. They don’t even need color to do this.
The final orthomosaic is the stitching together of the actual photos ‘draped’ over the terrain model created by the 3D data.
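As a rough illustration of what ‘draping’ the photos over the terrain model means (a conceptual sketch, not Pix4D's actual implementation): each cell of the terrain model is projected into a source photo and takes its colour from the pixel it lands on. A minimal pinhole-camera version in Python, where the names R, t, and K are generic placeholders rather than Pix4D outputs:

```python
import numpy as np

def sample_photo_for_cell(X, Y, Z, R, t, K, image):
    """Project one terrain cell (world point) into a photo and sample its colour.

    R, t  : 3x3 rotation and length-3 translation mapping world -> camera
    K     : 3x3 camera intrinsic matrix
    image : HxW(x3) pixel array of the photo
    Returns the pixel value, or None if the cell is not visible in this photo.
    """
    p_cam = R @ np.array([X, Y, Z]) + t        # world -> camera frame
    if p_cam[2] <= 0:                          # point is behind the camera
        return None
    u, v, w = K @ p_cam                        # camera -> homogeneous image coords
    col, row = int(round(u / w)), int(round(v / w))
    h, width = image.shape[:2]
    if 0 <= row < h and 0 <= col < width:
        return image[row, col]
    return None
```

Which photo each cell samples from is what the seam lines in the Mosaic Editor control, which is why editing the seams, rather than the point cloud, changes what appears in the orthomosaic.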
This will become clearer if you take a look at the tutorials on making orthomosaics. The good news here is that since 3D information is being generated underneath this vegetation, there are images that have a clean perspective of your subject. You are going to have to manually edit the orthomosaic by adjusting the seams that were automatically created by the software. This is a very common operation in creating mosaics, and the tools within Pix4D make it easy to do.
Just read up on what the orthophoto actually is and this process will make a lot more sense. Do that and I think you’ll be ok with this project.
Steve, thanks so much for your help!! I was going round in circles and not getting anywhere. I assumed that getting rid of the overlying point data and/or 3D model would automatically influence what the software used to generate the orthomosaic. As you correctly pointed out, I was concentrating on the wrong stage of the process. I have been working with the Mosaic Editor tools and achieved what I wanted.
Nigel, glad it worked out. There was a little miscommunication and we couldn’t leave you going round and round.
Have you found the Ortho and Planar toggle in the Mosaic Editor? For cutting in man-made objects with distinct edges, objects with height, and the like, planar can sometimes improve your results. As you learn about this process you’ll find you may give up a little spatial accuracy by using planar, but you avoid distortion if there is an object or small image patch you are having trouble with.
Not advocating use of planar all the time, just as a great tool when necessary.
I have a similar issue to the one Nigel had. Is it possible to use photos taken at a 45° angle in the Mosaic Editor to get the image below the tree in the orthomosaic?
An orthomosaic is a 2D map in which the perspective of the camera is corrected. You may try to use oblique images in certain regions of the orthomosaic using the Mosaic Editor. However, note that you may introduce distortions, and it is not recommended to use planar images for mosaics dedicated to measurement applications. For more information: https://support.pix4d.com/hc/en-us/articles/202557529
I’m doing exactly this, and I would like to check whether this is the current workflow:
1. Run Step 1 and then Step 2 with classification (use full resolution); the 3D model is dispensable.
2. Select the high-density vegetation points and move them to the “Disabled” group.
3. Run Step 3.
Does that sound about right? I tried it before and it gave OK results, but I was running low-resolution classification and noticed quite a few tree points remaining.
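For anyone who wants to sanity-check step 2 of that workflow outside the Pix4D GUI, the same filter can be run on an exported classified point cloud. This is a minimal sketch assuming laspy 2.x and that the exported LAS file uses the standard ASPRS code 5 for high vegetation; the file names are placeholders:

```python
import numpy as np
import laspy  # pip install laspy

# Read the classified point cloud exported after Step 2.
las = laspy.read("classified_point_cloud.las")

# Keep everything except points classified as high vegetation (ASPRS class 5).
classification = np.asarray(las.classification)
keep = classification != 5

filtered = laspy.LasData(las.header)
filtered.points = las.points[keep]
filtered.write("point_cloud_no_vegetation.las")

print(f"Dropped {np.count_nonzero(~keep)} of {len(classification)} points")
```

Inside Pix4D, moving those points to the Disabled group and rerunning Step 3 achieves the same effect for the DSM and orthomosaic.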
I also have the same question.
But if I want to eliminate all the vegetation before running Step 2, so that my point cloud does not show the vegetation either, which workflow should I apply? Thanks