
How many Panono images per acre are needed to model with Pix4D

Ok guys, I tested flying the Panono today on my Inspire 2 and it went really well.  I added a Pepwave Surf on the Go battery-powered Wi-Fi repeater on the I2 to give me good range on the Panono.  Tomorrow I will test exactly how far it can go.

Does anyone have a breakdown of how many images per acre would be required using only a Panono 360 camera at 100’, 200’, 300’, and 400’ AGL?  And maybe a better question: what is the maximum distance between each 360 photo that still maintains a good quality model in Pix4D?

I am hoping that someone, or the Pix4D staff, has looked into this and done some testing.  Any help will be greatly appreciated.



Hello, I started using Pix4Dmodel a few days ago (on a trial, actually).
I use a Panono camera to build VR tours.
I am very interested in how you attach the Panono to the Inspire, and how you use it with Pix4D?

I use a DJI Mavic Pro, a Phantom 4, and a Parrot Bluegrass

Kind regards
BERND Hoffstedde

Hi Bernd,

I will be making a YT video in the coming days on how to do this.  It is actually quite simple.  The Panono will only geotag the photos if you use a smart device (excluding the Samsung S7).  I attached the Panono to the front gimbal mount using a very light carbon fiber pole attached to the cap that screws into the mount.

You will need to have one smartphone attached to the drone (I am using a Samsung S5).  You then use the TeamViewer app to control the S5 from another Android (or iOS?) device on the ground.  Once you are controlling the smart device attached to the Inspire 2, you fire up the Panono; the onboard phone controls it via Wi-Fi, BUT still transmits the screen share to the device down on the ground.  You obviously have to have cellular data on both devices.  The transmission is near real time.

Now I use Litchi to run a mission.  Litchi will pause at each spot for whatever time you specify.  You simply take a picture using the device you have with you on the ground, and Litchi will then fly to the next waypoint.
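For anyone trying to plan out a mission like this, here is a rough sketch (my own helper, not anything built into Litchi or Pix4D) that lays out a serpentine "lawnmower" grid of waypoints at a fixed spacing, as lat/lon points you could then enter as a mission:

```python
# Sketch (my own helper, NOT part of Litchi): generate a serpentine
# (lawnmower) grid of waypoints at a fixed spacing over a rectangular
# area, starting from a home point. It only computes coordinates; how
# you get them into your mission planner is up to you.
import math

def waypoint_grid(home_lat, home_lon, width_m, height_m, spacing_m):
    """Return a serpentine list of (lat, lon) covering a width x height box."""
    m_per_deg_lat = 111_320.0  # rough meters per degree of latitude
    m_per_deg_lon = m_per_deg_lat * math.cos(math.radians(home_lat))
    cols = int(width_m // spacing_m) + 1
    rows = int(height_m // spacing_m) + 1
    points = []
    for r in range(rows):
        # alternate direction each row so the drone flies a lawnmower pattern
        xs = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        for c in xs:
            points.append((home_lat + (r * spacing_m) / m_per_deg_lat,
                           home_lon + (c * spacing_m) / m_per_deg_lon))
    return points

# Example: one acre is roughly a 63.6 m square; 10 m spacing -> 7 x 7 = 49 stops
pts = waypoint_grid(35.0, -80.0, 63.6, 63.6, 10.0)
print(len(pts))  # 49
```

The home lat/lon and the 10 m spacing above are just placeholders; the whole point of this thread is that nobody knows the right spacing for the Panono yet.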

I have asked Litchi to add an audible alarm each time it gets to a waypoint and stops.  That way, when you hear the beep, you can take the photo.

Litchi is not interested in controlling 3rd party cameras like the Panono or Fusion.  If we can get someone (even Pix4D) to trigger 360 cameras using an onboard smartphone, then we can fly high quality cameras like the Panono and take far fewer pictures to produce a model.  Right now it is a manual process, but automation of this will become possible sooner or later.

What I don’t know is how close together the Panono images have to be in order to make a high quality model.

Maybe Pix4D will chime in and let us know.  I could just start testing, but I would rather they tell me if they already know.  I just got through testing this whole setup and it works very well.

So even if you do not wish to make a Pix4D model, if you want to take high quality 360 panos for a sky tour (i.e. 1 pano every 5 acres), you can set up a mission to run and take all of your pictures very quickly.  Before flying my Panono this way, I could only do 2 high quality 360 photos on a battery, maybe 3.  Now I can do 20 or more.

It really does not matter whether you are flying a Panono or some other camera; you can attach a smartphone to the bottom of your drone to control a 3rd party 360 camera at any distance, because screen sharing handles that.  Gotta love technology!

I will post my YT video link here when I get it done.  It will be the end of next week, because I will be tied up until then.


Hi Tim
Perfect idea. I really want to try it. I am curious about your video.


Impressive setup! I have not heard of a similar setup so far, and we have not done any testing on our side for a similar use case. Only a few users have asked us about the Panono. Most applications we have seen for spherical cameras are for indoor reconstruction (click here for an example). In confined spaces, such a camera helps to get good overlap between images and cover the area. That said, I know it can be tough to find the right distance between shots to get a good reconstruction, and it takes some iteration to find the right balance. It’s tricky, because 360 images give a false impression of high overlap between images. Typically, images get taken every few meters (1 to 3 m).
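To put that 1 to 3 m figure in perspective, here is a rough back-of-envelope calculation (a sketch only, assuming a square acre and a square capture grid; these are not official numbers):

```python
# Back-of-envelope (my own arithmetic, based on the 1-3 m spacing
# mentioned above): roughly how many capture positions would a square
# grid at spacing s need to cover one acre?
import math

ACRE_M2 = 4046.86            # square meters in one acre
SIDE_M = math.sqrt(ACRE_M2)  # ~63.6 m, treating the acre as a square

def images_per_acre(spacing_m):
    per_side = math.ceil(SIDE_M / spacing_m) + 1  # grid points along one side
    return per_side ** 2

for s in (1.0, 2.0, 3.0):
    print(f"{s:.0f} m spacing -> ~{images_per_acre(s)} images per acre")
# 1 m -> 4225, 2 m -> 1089, 3 m -> 529
```

Those counts are also a good illustration of why this kind of capture is normally used in confined spaces rather than over open acreage.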

Were you able to create a model in Pix4D software with the images you have acquired? Make sure you have an equirectangular image format to process the project. Here is an article that could help.
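If you are unsure whether your stitched export is equirectangular: a full equirectangular pano covers 360° × 180°, so its pixel dimensions should be exactly 2:1. A quick sketch to check (the dimensions below are just example values, not Panono specs):

```python
# Quick sanity check (my own helper): a full equirectangular panorama
# covers 360 x 180 degrees, so width should be exactly twice the height.
def looks_equirectangular(width_px, height_px):
    return width_px == 2 * height_px

print(looks_equirectangular(8192, 4096))  # example 2:1 stitched pano -> True
print(looks_equirectangular(4000, 3000))  # a regular 4:3 photo -> False
```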

Also, could you go more in depth on the use cases you are looking to cover with such a setup?
For now, my understanding is that you are looking to take fewer images for the same area.

I’m really looking forward to seeing your video, please share it here once it is published.