When Step 1 processing produces more than 1 block, what is the best way to ensure that I’m adding manual tie points in a way that will help stitch those blocks together?
When I’m merging subprojects that exactly overlap with each other (same survey area, but cameras with different spectral/spatial resolutions) and 2 blocks are created, it is obvious in the rayCloud that there are two “layers” to the point cloud, and placing manual tie points evenly spaced throughout the survey area enables Pix4D to properly stitch those layers into a single block.
What if more than 1 block is created in a project where there isn’t this obvious “2 layer” effect? Is there a way to visualize which parts of the survey area have been assigned to each block, so that I can create manual tie points along the seams between them and stitch them together into a single block?
I’ve attached an example quality report overview showing that 4 blocks were created (Figure 1), and the layout of the image captures showing an even spatial distribution of the images (Figure 2). This project was processed with a keypoints image scale (in Step 1) of 2. The same project was processed with a keypoints image scale of 0.5, which resulted in 2 blocks. Figures 3 and 4 show the same parts of the Quality Report as Figures 1 and 2, but for the processing run that used a keypoints image scale of 0.5.
Figure 1. The Quality Report overview of the processing. There does appear to be a streak of different reflectance on the left-hand side of the “Preview”. Maybe different lighting conditions for that flight? Maybe that’s creating the different blocks?
Figure 2. The spatial distribution of the image captures and the manual tie point positions.
Figure 3. Same as Figure 1, but with keypoints image scale (Step 1) set to 0.5
Figure 4. Same as Figure 2, but with keypoints image scale (Step 1) set to 0.5
Would you share the type of camera that you are using? Did you fly different flights at different times, or are you using a rig camera like a Sequoia or Micasense?
Would it be possible to share the complete Quality Report?
I’m using either a DJI Zenmuse X3 or a Micasense RedEdge3. Sometimes I’m creating a project that uses imagery from both cameras.
All of these flights were done on the same day. The X3 and the RedEdge are both mounted on my aircraft, and both cameras are capturing images simultaneously (the X3 is being triggered by the flight planning software, Map Pilot, and the RedEdge is on the Timer trigger set to capture images every 2 seconds).
I can definitely share quality reports. These are all over the same study area and use the same photographs.
I changed the keypoints image scale parameter for Step 1 on the merged project from 2.0 to 1.0, uploaded it to Pix4D Cloud, and got 1 block, but the orthomosaic wasn’t generated, I assume because the point cloud wasn’t generated well enough. (It’s odd that the Point Cloud Point Density for Step 2 was changed from “Optimal” to “Low”. I didn’t make this change; was it made during cloud processing for some reason?) https://www.dropbox.com/s/4r30vxf2jyxylfm/eldo_3k_1_report.pdf?dl=0
Hopefully this all gives some context for how I came to ask the question, but really the question is pretty broad: how can I see which blocks the automatic tie points have been classified into so that I can effectively place manual tie points?
It’d be great if this were a “Display property” in the rayCloud (Tie Points > Automatic > Display Properties > Color by block designation or color by original reflectance)
A follow-up here about the Point Density being changed from “Optimal” to “Low”. I suspected that I might have just been wrong about the project parameters that I uploaded, and that does appear to be the case. I must have inadvertently switched the Point Density parameter by toggling between templates or something. I’ll try to upload it again using my desired parameters and see what happens.
I am not sure if I understand correctly. It seems that you mount two cameras together (RedEdge + RGB X3), create two different projects, and then merge them.
Looking at the last three Quality Reports, the last one, where only one block was created, shows many more connections between all of the cameras, so Image Scale 1 produced a better tie point extraction and the two blocks connect much better.
As for the point cloud, in Step 2 the user can define which bands or cameras are used. The recommendation is to use the RGB camera for the point cloud and mesh geometry, as it is the camera with the higher resolution.
I do not have a project with RGB + RedEdge cameras, but I do have a project with an RGB + a thermal camera, so you can see where and how to select what you want to use in each case:
In your case, you have selected all of them:
If you select only Group1 (the RGB camera) for point cloud generation and mesh geometry, I think the result will improve.
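If you want to double-check which group actually has the higher resolution before making that selection, a quick way is to look at the pixel dimensions of a few images from each camera. Here is a minimal sketch in Python (the folder paths are placeholders, not anything Pix4D produces; it only assumes the X3 and RedEdge images are sorted into separate folders):

    # Minimal sketch: report the pixel dimensions found in each camera's image folder,
    # to confirm which group to keep for point cloud and mesh geometry.
    # The folder paths below are placeholders; adjust them to your own project layout.
    from pathlib import Path
    from PIL import Image  # pip install Pillow

    folders = {
        "X3_RGB": Path("images/x3"),
        "RedEdge": Path("images/rededge"),
    }

    for group, folder in folders.items():
        sizes = set()
        for img_path in folder.glob("*"):
            if img_path.suffix.lower() in {".jpg", ".jpeg", ".tif", ".tiff"}:
                with Image.open(img_path) as img:
                    sizes.add(img.size)  # (width, height) in pixels
        print(group, "->", sorted(sizes))

The group that reports the larger dimensions is the one to keep selected for the point cloud and mesh geometry.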
I’m curious whether there is an answer to Michael’s original question about how to determine which sections of the point cloud belong to distinct blocks. As he said, it’s not always easy to tell where the separate blocks are located.
It would be great to be able to colour camera positions by their block in the rayCloud. However, I think this is essentially what the Quality Report does in the 2D Keypoint Matches figure, which you can then visually match up with what is going on in the rayCloud.
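If visually matching the report against the rayCloud feels too rough, one workaround I can think of is to note down which images appear to belong to which block (approximated from the 2D Keypoint Matches figure in the Quality Report) in a small CSV together with their geotags, and then plot the camera positions coloured by block so the seams where manual tie points would help stand out. A minimal sketch, assuming a hand-made blocks.csv with columns image, lon, lat, block; this file is not produced by Pix4D itself:

    # Minimal sketch: plot image capture positions coloured by block so the seams
    # between blocks stand out and manual tie points can be placed along them.
    # Assumes a hand-made blocks.csv with columns: image, lon, lat, block
    # (block membership approximated from the Quality Report; only lon, lat and
    # block are actually used here).
    import csv
    from collections import defaultdict
    import matplotlib.pyplot as plt

    positions = defaultdict(list)  # block id -> list of (lon, lat)
    with open("blocks.csv", newline="") as f:
        for row in csv.DictReader(f):
            positions[row["block"]].append((float(row["lon"]), float(row["lat"])))

    for block, coords in sorted(positions.items()):
        lons, lats = zip(*coords)
        plt.scatter(lons, lats, s=12, label=f"block {block}")

    plt.legend()
    plt.xlabel("longitude")
    plt.ylabel("latitude")
    plt.title("Image positions by block (place MTPs along the seams)")
    plt.show()

Crude, but it at least gives a map of where each block sits until something like this exists in the rayCloud.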
It is possible to visualize the separate blocks in the Quality Report; here is an example of how they are displayed:
As for being able to visualize this in the rayCloud, I am afraid that we don’t have an update on this feature request yet.
Adding this feature is currently not on the roadmap. However, your suggestion will have more weight if you share it on our Community and collect more votes from other users.
Here is how you can do this:
Go to the Feature Request section of the product for which you are suggesting a feature.
Log in or create an account, if you don’t have a Pix4D account yet.
Look for the feature you are interested in. Someone else might have already asked for it. If such a request doesn’t exist, create it as a new topic.
Vote for it.
Done, your vote is counted!
You can manage, follow up, or even remove your vote(s) if you change your mind.