
Problems with datasets between drone and handheld camera

Hi support! I've been looking around here for a while as I learn the trade of 3D photogrammetry, and this is my first post as now I am stuck! I am focused on VR walkthroughs, so I need to capture the land but want high-quality buildings as well.

As my 3rd test subject I am trying to 3D scan a small castle on a hill. The area is large, so the castle itself loses quality when you zoom in, and as such my plan is to use the handheld camera to increase the quality of the main object (the castle).

I am using a drone (DJI Phantom 4 V2) and a camera (Nikon D5600 with AF-S 50mm f/1.8G lens and Solmeta geotagger), so I have full GPS data for both the drone and the camera.

However, when I import the photos from the shoot into Pix4D, the images from the camera come in inverted / upside down and far away from the drone data, as in my image…

I tried to set MTPs, but when I click a point derived from the camera (the inverted castle), the photos from the drone castle don't show up, so I can't cross-tag them. I also tried doing two subprojects, one for each, but when I merge them it looks basically the same as the image here.

I'm not sure why I am getting this outcome.

I am also attaching an image from the drone and one from the camera so you can see the metadata from each. The only thing I can see is that the DJI has a GPS altitude of 805.069 whereas the Solmeta GPS has an altitude of 881, but that still doesn't really explain why all the camera data is inverted either.

When I process all the handheld camera data on its own, the project comes out normally; it's not inverted or upside down.

Hi Karsten,

The images that are saved on the drone's SD card are geotagged by DJI. The lat./long. coordinates in the image EXIF are reliable; however, there might be some inaccuracies in the altitude depending on the location where you are mapping. We ran tests in our office here and found that the vertical coordinate can be off by several meters, with the error sometimes reaching 100 meters. Some users have noticed the same. Note that this is just an offset, meaning that within the model the accuracy is not affected; only the absolute location is. As for the Solmeta geotagger, I can't provide you with any information, as we have never tested it on our end. You would need to check which dataset has more accurate data and correct the wrong one accordingly.
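To illustrate what "just an offset" means, here is a minimal sketch of shifting one dataset's altitudes by a constant before processing. The helper function and all coordinate values are hypothetical examples, not a Pix4D tool; the idea is simply that if you trust one GPS source, you can apply the measured difference to the other dataset's geotags.

```python
# Minimal sketch: reconciling a constant vertical offset between two
# geotag sources. All coordinates below are hypothetical example values.

def apply_altitude_offset(geotags, offset_m):
    """Return geotags with a constant offset (meters) added to each altitude."""
    return [(lat, lon, alt + offset_m) for lat, lon, alt in geotags]

# Suppose the handheld camera's GPS (assumed trustworthy here) reports
# 881 m at a spot where the drone's EXIF says 805.069 m: the correction
# is simply the difference between the two readings.
trusted_alt = 881.0
drone_alt = 805.069
offset = trusted_alt - drone_alt  # roughly 75.93 m

drone_geotags = [
    (47.3769, 8.5417, 805.069),
    (47.3771, 8.5419, 807.500),
]

corrected = apply_altitude_offset(drone_geotags, offset)
print(corrected)
```

Because the offset is constant, the shape of the model is unchanged; only its absolute vertical position moves.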

In order to fix the uncertainties you are experiencing, we always recommend processing each flight separately and then merging the results (in your case, create 3 subprojects) and using ground control points (GCPs). Please have a look at our article How to align projects, section "If there are no common ground control points GCPs present in the different projects".
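As a sketch of how GCPs are typically prepared for import, here is a short example that writes measured points to a simple CSV file. The point labels and coordinates are hypothetical, and the column order shown is only an assumption; check the GCP import dialog in your software for the exact layout it expects with your chosen coordinate system.

```python
import csv

# Hypothetical GCPs measured on site (label, latitude, longitude, altitude m).
gcps = [
    ("gcp1", 47.37690, 8.54170, 881.2),
    ("gcp2", 47.37712, 8.54195, 880.7),
    ("gcp3", 47.37701, 8.54231, 882.0),
]

# Write a plain comma-separated file. Column order is an assumption here;
# verify it against the import dialog before using the file.
with open("gcps.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for label, lat, lon, alt in gcps:
        writer.writerow([label, lat, lon, alt])
```

The same file can then be reused across all subprojects, which is what makes GCPs useful for aligning datasets from different cameras.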

Cheers,

Thanks for the help here! I'm trying to figure it out as I go along… We are trying to build up a large archive of 3D locations to share online, so we are looking for ways to make it as easy as possible to get amazing quality both zoomed out and zoomed in, and will be doing a lot of multi-camera shoots per location.

Is there an "easy way" to add GCPs that would work across a multi-camera shoot? I have GPS units on the drone, on my camera, and on the tablet I use for controlling the drone itself (using a Garmin there) and thought that would do the trick, but I guess not perfectly…

The problem with matching tie points to ground points is that the camera shots don't necessarily share the same data as the drone shots, since the camera is facing the object horizontally while the drone is facing down from the sky vertically…

Sorry, I also forgot to ask: why would one dataset be INVERTED when both are processed at the same time in Pix4D?

When I process them individually, they both come out fine…

Hi Karsten,

The idea of the project you are trying to accomplish sounds very interesting. However, first, could you tell me whether absolute accuracy is important to you? If yes, you would need to use precise GCPs anyway. If relative accuracy is enough, you can use only MTPs.

Nevertheless, the data you obtained can cause some problems, as we are dealing with two datasets that have large discrepancies between them, and we do not know which one has the correct coordinates.

The best approach in this kind of project is to have an overlap area between the nadir and oblique datasets if you want to merge the projects. Otherwise, two separate blocks are created. For more information, see Merging projects in Pix4Dmapper.

In sum, what needs to be done, and how, depends on your requirements (absolute vs. relative accuracy) and on which dataset has the proper coordinate information.

Best,

Thanks Beata!

I believe the issue with the inverted part of my shoot is a hardware issue with my GPS. I'm getting a new GPS and will see if that helps, but I have tried with Nikon's GPS from my phone and still have a similar issue merging the scenes.

Relative accuracy is fine. We are just trying to get the best quality possible in both the building and the surroundings, but we're having a hard time tying them together, as it's hard to get overlaps when the drone is shooting top-down while the camera is shooting horizontally.

Even if I use GCPs, they would help the drone, but it's hard for the handheld camera to pick them up, since they are on the ground.

We are trying to create virtually explorable locations, so we need a way to get the best of both: handheld cameras give amazing detail but have limited scope, and drones cover great locations but have limited quality!

Any ideas or suggestions on how to combine drone + DSLR would be great.

So I guess the question is: what's the best way to get an overlap area between nadir and oblique datasets when shooting buildings like castles, mansions, etc.?

Hi Karsten,

In my opinion, the best way to get overlap between the nadir and oblique datasets would be as follows:

  1. Fly the area with the drone and obtain both nadir and oblique imagery (a circular mission around the castle) -> process all the images together as one subproject.
  2. Take pictures horizontally with the camera you have -> process them as the second subproject.
  3. In each subproject, use MTPs to obtain relative accuracy.
  4. Merge the subprojects.

I believe that could help you achieve your goals.
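One more rough rule of thumb for combining the two datasets: the closer the ground sampling distance (GSD) of the drone and DSLR images, the easier they are to match. The standard GSD formula can be sketched as below; the sensor widths, focal lengths, and distances are example values only, so check them against your actual gear.

```python
def gsd_cm_per_px(sensor_width_mm, focal_length_mm, distance_m, image_width_px):
    """Ground sampling distance (cm per pixel) at a given camera-to-subject distance."""
    return (sensor_width_mm * distance_m * 100) / (focal_length_mm * image_width_px)

# Example values only -- verify against your own camera specifications.
# Drone at 50 m altitude (1-inch sensor ~13.2 mm wide, 8.8 mm lens, 5472 px wide):
drone_gsd = gsd_cm_per_px(13.2, 8.8, 50, 5472)

# Handheld DSLR 20 m from the facade (APS-C ~23.5 mm wide, 50 mm lens, 6000 px wide):
dslr_gsd = gsd_cm_per_px(23.5, 50.0, 20, 6000)

print(f"drone: {drone_gsd:.2f} cm/px, DSLR: {dslr_gsd:.2f} cm/px")
```

In this example the handheld shots are much sharper per pixel than the drone shots, which is why flying the oblique circular mission lower (or shooting the DSLR from slightly farther back) narrows the resolution gap and helps the software find common points between the subprojects.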

Best,