
FLIR Duo Pro R

Hi Reto, 

Yes, I have created a support ticket with Pix4D support to ask why it's not currently aligning. I have unlimited access to this camera and am out in the field approximately 3 times per week. We conduct surveys across the UK and are testing this camera out, so when I am on site conducting a survey, I will finish off and then put this camera up to do some test flights.

The only issue we may see is that I do not use ground control points. We survey solar farms, and creating an orthomosaic and DEM is not our primary concern; this is a trial to see if it's a viable deliverable to add to our current offering. If we can accurately and efficiently create a thermal orthomosaic of a site, then we can begin adding this to our offering as a bonus feature.

I currently have two complete data sets from two different sites, captured at 60 m and 65 m above ground level. They definitely have sufficient overlap, because they calibrate and align 100% in Agisoft. The first site I flew at 65 m above take-off, taking an image every 2 seconds at 4 m/s, in a double grid pattern planned with DroneDeploy. The overlap was set to 80%, although due to the size of the sensor I'm not sure this actually changes anything; the flight plans are typically based on an RGB sensor, for example the Phantom 4 Pro's, which I believe is a lot bigger. Regardless, I flew a double grid/crosshatch pattern, so there is definitely sufficient overlap. I have stitched this set all the way to the orthomosaic stage in Agisoft, but here is where I am struggling: exporting a radiometric orthomosaic. I haven't done one before.

Hence I have now looked at Pix4D, as I know some people have had success with it previously. I am currently using a trial version, but it may expire in the next few days, so I hope to be successful before then.


The second data set was flown at 60 m AGL at 4 m/s; the flights were planned manually with the Litchi flight planning app. Images were captured as radiometric JPEGs every 2 seconds, in parallel flight lines approximately 15 m apart, so the overlap is definitely sufficient. The drone was oriented north (the solar panels are south-facing) and I was flying waypoints along the rows of panels (east to west). Again, this imagery aligns perfectly in Agisoft but does not seem to align in Pix4D (this is the set I refer to in my first post above).

I have contacted FLIR directly regarding the camera specifications. They say it is the same sensor as the Tau2, but they then sent me incorrect data, so I have escalated the enquiry further and am waiting to hear back.

Hi Mark

Thanks a lot for the in-depth description of your tests; in a few days I will be in exactly the same situation, I guess. I have been using Pix4Dmapper for RGB mapping for a very long time, and I'm confident that first-level support will solve this issue too. If you want to drop me your e-mail address, we can exchange test results later as well. Mine is

Regarding communication with FLIR, I can confirm that this is a rather sad story…


Hi Mark,


I have just sent you my processing job for both your thermal and visual imagery, which I believe is entirely satisfactory. If it is okay with you, please share it here (at least some screenshots).



We currently have difficulties extracting the R-JPEG imagery from FLIR cameras. We have contacted FLIR from our side to see what causes this issue. We believe that the latest changes to the FLIR SDK do not align with our algorithms.

As soon as I have news from my side, I will comment here.

Hi, as Ina mentioned, she was successful in processing the complete data set: 100% of the images were aligned and calibrated. It took me about 10 minutes to get the index map from the data once I had those settings, as Pix4D is really user-friendly.

I believe the massive difference between my failed processing and the successful output came down to the following setting change:

Internal parameters optimisation - all prior 

Previously it was set to All. Changing it to All Prior brought the calibration to 100% success.

The other main settings, which I managed to source straight from FLIR support, are the following:

Visible camera
Sensor width (mm) = 7.4 mm
Sensor height (mm) = 5.55 mm
Pixel size (µm) = 1.85 µm
Focal length (mm) = 8 mm
Principal point x (mm) = 7.4 mm / 2 = 3.7 mm
Principal point y (mm) = 5.55 mm / 2 = 2.775 mm

Thermal camera
Sensor width (mm) = 640 × 17 µm = 10.88 mm
Sensor height (mm) = 512 × 17 µm = 8.704 mm
Pixel size (µm) = 17 µm
Focal length (mm) = whatever lens you have (13 mm in this case)
Principal point x (mm) = halfway across the sensor: 10.88 mm / 2 = 5.44 mm
Principal point y (mm) = halfway down the sensor: 8.704 mm / 2 = 4.352 mm
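The arithmetic above is easy to double-check in a few lines. This is just a sketch of my own (the helper name and output layout are mine); the pixel pitches, resolutions, and focal lengths are the ones quoted above, and the 4000 × 3000 visible resolution is inferred from the quoted sensor size and pitch:

```python
def sensor_params(width_px, height_px, pitch_um, focal_mm):
    """Derive sensor dimensions and principal point (sensor centre)
    from pixel count and pixel pitch. All outputs in millimetres."""
    w_mm = width_px * pitch_um / 1000.0   # 1000 um per mm
    h_mm = height_px * pitch_um / 1000.0
    return {
        "sensor_width_mm": w_mm,
        "sensor_height_mm": h_mm,
        "focal_mm": focal_mm,
        "principal_x_mm": w_mm / 2,
        "principal_y_mm": h_mm / 2,
    }

# Thermal core: 640 x 512 px at 17 um pitch, 13 mm lens
thermal = sensor_params(640, 512, 17.0, 13.0)
# -> 10.88 x 8.704 mm sensor, principal point (5.44, 4.352) mm

# Visible camera: 7.4 x 5.55 mm at 1.85 um pitch implies 4000 x 3000 px
visible = sensor_params(4000, 3000, 1.85, 8.0)
```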

Below is a screenshot of the processed raycloud and calibrated cameras.


Here is the digital surface model. This was all produced straight from the low-resolution TIFF files (640 × 512), and I still feel the quality we gained is amazing.



If anyone needs any other help regarding this, you can ask me and I will try, but Ina is definitely the wizard who can better assist.



Hi Ina, 

Just with that example we processed: any idea why the site is somewhat curved to the south? The site is actually dead straight east to west, but the model is warped, as you can see. I'm not sure why this happened; the site is also really flat, with no ridge down the centre.



You also mentioned an issue extracting data from the radiometric JPEG images. Would you recommend we capture the imagery separately, as TIFF files plus visible JPEGs, straight away?

The camera is capable of doing this, and it seems to work for the processing; however, the export gives strange temperature values, which are not recorded in degrees Celsius. I believe there is an equation to derive the Celsius temperatures, but I haven't managed to apply it accurately as yet. Taken directly from the Duo Pro R manual:


The 14-bit TIFF file format of the Duo Pro R camera contains temperature information in the form of pixel intensities. These pixel values can be converted into temperature using the following formulas:

[counts@High Resolution] * 0.04 – 273.15 = deg C

[counts@Low Resolution] * 0.40 – 273.15 = deg C

where “counts@High Resolution” are the individual pixel values recorded using the Temp Range setting of High (−25 to +135 °C), and “counts@Low Resolution” are pixel values recorded at a Temp Range setting of Low (−40 to +550 °C).
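Those two formulas are simple enough to apply directly to the 14-bit TIFF pixel values, e.g. with NumPy. A minimal sketch (the function name is mine; the scale factors and range settings are the ones from the manual excerpt above):

```python
import numpy as np

def counts_to_celsius(counts, temp_range="high"):
    """Convert Duo Pro R 14-bit TIFF counts to degrees Celsius.

    temp_range: "high" for the High Temp Range setting (-25 to +135 C),
                "low"  for the Low setting (-40 to +550 C).
    """
    scale = {"high": 0.04, "low": 0.40}[temp_range]
    return np.asarray(counts) * scale - 273.15

# Example: a count of 7500 in High range -> 7500 * 0.04 - 273.15 = 26.85 C
```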

Hi Mark,


Bending is usually present in projects over flat, low-texture terrain. At the moment I am not at my PC to take a look at the data, but once I am back, I will look again. Do you have the bending in both RGB and thermal? Having GCPs should correct this behavior.

An extraction problem would be self-evident, since the software would not be able to read the imagery at all. In your case it does read it, so the firmware that you have on your camera is most probably not affected. We are still investigating this.

As you have an R version of the camera, the temperatures should be absolute. However, this depends on the units the imagery was initially shot in.

If you can allow me a bit of time, I will look closely at your data in the coming days and come back to you.




Hi Mark,


As expected, the thermal project presents the bending while the visual RGB project does not. This is due to the few features present in the thermal imagery and the other difficulties I mentioned in my post above.


However, if you are planning to merge the projects, remember that you have the option to use the point cloud from RGB and the texture from thermal.



We are finally going forward with the sensor purchase. Our main objective is to use the FLIR Duo Pro R with Pix4Dcapture or DJI GS Pro to collect, and Pix4Dmapper to process. At this time, is it possible to plan flights (grid flight plans) using Pix4Dcapture with the FLIR Duo Pro R on an M600 or M200?

If not, which dual thermal sensors am I able to use with Pix4Dcapture and Pix4Dmapper, preferably on DJI drones like the M200 or M600? The DJI Zenmuse XT2?

Pedro Penedos

Hi Pedro,

The FLIR Duo Pro R works well in Pix4Dmapper, but not as a rig. Some tests have been done by our team, and the cameras are not synchronized well enough. We have no feedback about the Zenmuse XT2.

Also, we have not tested it, but it should be possible to use a FLIR Duo Pro camera on a DJI M600 or DJI M200 by adding it as a custom camera in the iOS version of Pix4Dcapture. You can find how to define a custom camera in Pix4Dcapture here. The important step is to define the time-lapse directly in the settings of the camera, and then apply the same value for the Min. triggering interval [s] in the application. Otherwise, the camera will not take any pictures, as it cannot be controlled by Pix4Dcapture.

The Zenmuse XT2 camera is not supported by Pix4Dcapture, and we are not planning to implement it in our software at the moment. However, the latest versions of Pix4Dcapture integrate DJI SDK 4.6, which introduces support for the XT2 camera. It might work if you use the XT2 with these Pix4Dcapture versions, but we cannot make any guarantee.

About processing thermal images, you can find more information in our knowledge base:



Dear Gael,

Thanks for the update! Would it be possible to forward your experience that “the cameras are not synchronized well enough” to FLIR? I think FLIR is also interested in providing greater added value with their products.



New to this thread, but I’m having many of the same issues processing thermal imagery from the FLIR Duo Pro R. I’m using exactly the parameters and settings specified by Mark Craig on August 16, above, but I am still getting garbage results (a wildly distorted orthomap, for one).

In my particular case, my UAV flew at a nearly constant 122 meters above the surface, in a rectangular grid over a 100-acre field. There are 361 individual image files, each nadir-viewing and with reasonably good GPS geolocation. The surface has some gentle rolling hills, but nothing that should pose any problems, especially when viewed at nadir from that altitude.

I sincerely believe that if I were able to extract the thermal images individually and print them on paper (I can’t, because the R-JPEG file format is apparently a trade secret), I would have no problem laying them all out on a table and producing a paper mosaic that is “good enough” for my purposes. Is there a way to tell Pix4Dmapper to do the equivalent in software: trust the geolocation and camera orientation, overlay the images without distortion correction, and create a low-precision but reasonable digital mosaic?

Alternatively, what camera internal parameters should I consider adjusting experimentally to see what makes Pix4Dmapper happier?


Hi Grant,

We are currently processing your images. We will get back to you soon and will also update in this post.

We were talking with Pix4D about this during Intergeo 2018: we do not need 3D models, just the orthophoto. Apparently Pix4Dfields is able to do this, but the two products, Mapper and Fields, are licensed separately, and this feature is not included in Mapper…


Pix4Dfields does not support thermal imagery at this time.

But you see my point about pix4d making a product tailored to the needs of an application area, I hope! :wink:

We are really trying to consider all user needs and feature requests!

@Grant The best results (though with only 19% of images calibrated) were possible with:

  1. Use image scale 1
  2. Use geometrically verified matching
  3. Use the Standard calibration method (only if the Alternative method does not work; Alternative calibration assumes the images are nadir and will fail to calibrate them if they are oblique)
  4. Use All Prior for internal camera parameters optimization

Images from the Duo Pro R must be processed in two separate projects (thermal and RGB). Pix4D does the splitting of the images; the thermal and RGB images should then be used in two separate projects and merged afterwards.

Since thermal images have a very low resolution, it is difficult to extract features, even more so when standard calibration has to be used. Thus the images should have at least 90% overlap.

Thank you, Momtanu; I look forward to trying the settings you recommended. 

Note that 90% overlap (at least in the along-track direction) would be impossible for our fixed-wing UAV flying at ~15 m/s, as the Duo Pro R can only shoot at a maximum rate of 1 frame per second, implying a minimum of ~15 meters of non-overlap. To achieve 90% overlap would require ~150 m of along-track coverage in each image, which would in turn require us to fly at twice the legal altitude for UAVs.

Lateral overlap could be arbitrarily high, but the tradeoff is the flight time required to cover our 0.75 × 1.0 km field: going from 80% to 90% lateral overlap doubles the flight time, e.g. from 40 minutes to 1 hour and 20 minutes.
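For what it's worth, the along-track overlap can be estimated from the flight parameters. A rough sketch of my own, assuming the Duo Pro R's 17 µm pixel pitch, the 13 mm lens, and 512 px in the along-track direction (swap in your own lens and sensor orientation):

```python
def along_track_overlap(alt_m, speed_ms, interval_s,
                        pitch_um=17.0, focal_mm=13.0, track_px=512):
    """Fraction of along-track overlap between consecutive frames,
    for a nadir camera over flat ground."""
    gsd_m = pitch_um * 1e-6 * alt_m / (focal_mm * 1e-3)  # metres per pixel
    footprint_m = gsd_m * track_px       # ground coverage along track
    spacing_m = speed_ms * interval_s    # distance flown between frames
    return 1.0 - spacing_m / footprint_m

# 122 m AGL, 15 m/s, 1 frame/s -> roughly 82% along-track overlap,
# consistent with 90% being out of reach at that speed and altitude.
```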

In any case, I don’t quite understand why the following trivial algorithm couldn’t be optionally applied to nadir-viewing thermal imagery:

1)  Trust the geolocation and camera orientation data attached to the images;

2)  Project the images onto the horizontal plane without calibration, possibly making at most minor adjustments in orientation and/or overall scaling to maximize the gross correlation between overlapping images;

3)  Create the mosaic as the weighted average of the overlapping projected images, with the weights maximized at the nadir point in the image;

Using the above method, it would be impossible to get the wildly distorted results that we had been getting, or to have the job quit on account of camera calibration. In our case at least, this naive and admittedly imprecise algorithm is still far preferable to no usable result at all, which might otherwise be the common outcome for thermal imagery lacking resolution and contrast.
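For illustration only, steps 1 to 3 could be sketched roughly as below. This is my own toy implementation of the idea, not anything Pix4D offers: it assumes perfectly nadir images on a common ground sample distance and known ground positions for each image centre, and it blends overlaps with weights that peak at each image's nadir point.

```python
import numpy as np

def naive_mosaic(images, centers_m, gsd_m):
    """Blend nadir images onto a ground grid, trusting geolocation only.

    images:    list of equally sized 2-D arrays (e.g. temperature maps)
    centers_m: (y, x) ground coordinates of each image centre, in metres
    gsd_m:     ground sample distance, metres per pixel
    """
    h, w = images[0].shape
    half = np.array([h, w]) * gsd_m / 2.0            # footprint half-size
    centers = np.asarray(centers_m, dtype=float)
    mins = centers.min(axis=0) - half                # mosaic ground extent
    maxs = centers.max(axis=0) + half
    shape = np.ceil((maxs - mins) / gsd_m).astype(int)
    acc = np.zeros(shape)
    wsum = np.zeros(shape)
    # per-pixel weight, maximal at the image centre (the nadir point)
    yy, xx = np.mgrid[0:h, 0:w]
    wimg = (1 - np.abs(yy - h / 2) / (h / 2)) * (1 - np.abs(xx - w / 2) / (w / 2))
    for img, c in zip(images, centers):
        # step 1-2: place each image by its geolocation, no calibration
        oy, ox = ((c - half - mins) / gsd_m).astype(int)
        acc[oy:oy + h, ox:ox + w] += img * wimg
        wsum[oy:oy + h, ox:ox + w] += wimg
    # step 3: weighted average where covered, NaN elsewhere
    return np.where(wsum > 0, acc / np.maximum(wsum, 1e-12), np.nan)
```

Real flight lines are never perfectly nadir or perfectly gridded, which is exactly the error this sketch ignores, but it shows that a geolocation-only mosaic is only a few dozen lines of code.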




Grant, I don’t think you understand how Pix4D works for creating orthos. Not a knock; we spent a couple of years working with Pix4D in the early days, before the “thermal” template existed (we helped create that :wink:).

Pix4D doesn’t just take images and crop them together. The ortho image you see is a top-down view of an extremely dense point cloud: to create the ortho, all the images are processed into millions of 3D points that are then flattened into an orthomosaic.

Tools like PTGui can do pure image stitching, but with the low resolution and the high similarity of solar panel imagery, it does not work at all. We have been doing this for 5 years and have used and worked with every software vendor in the market. Pix4D is by far the best, but it still has its issues.

We have been Tau2 users and have recently started using the Duo Pro R, and the results are not good. The Duo Pro R / XT2 is not as high quality as the Tau2 (regardless of what FLIR tells you), and this reduced quality makes processing thermal imagery from the Duo Pro R very difficult.

We are still trying with every new version of Pix4D, and we know it will get better, but we have gone back to using the Tau2.

I’m surprised that the Duo Pro R’s performance doesn’t (at least) match the Tau2’s, because I always thought it has a Tau2 (sensor) inside. But I might be wrong; I have no in-depth knowledge of FLIR internals.

However, one thing I still don’t understand yet: 

Why is there no benefit from the dual-camera configuration? Given that the RGB and the IR images are congruent/superimposable (though of course not with the same resolution/GSD), why can’t Pix4Dmapper use the RGB images for camera calibration and point cloud generation, and the IR images only as an alternative point cloud texture and ortho layer?

I always thought that this is what camera rigs (in Pix4Dmapper) are all about.
Obviously I am missing something?