
Synchronize position of RGB and multispectral projects from Sequoia sensor


I have RGB and multispectral data from a Sequoia sensor. If I understood correctly, the manual recommends splitting the dataset into two projects, one for the RGB imagery and one for the multispectral imagery. Both datasets process well, but there is a slight spatial shift between them. What's the best way to synchronize/georeference the two datasets to each other? I don't have ground control points in the field.

Anyone who has looked into this?



Hey Matthias,

I believe the slight shift between the RGB and multispectral mosaics happens due to the positions of the RGB and multispectral sensors on the camera body. You could use QGIS or ArcMap to georeference them to each other after processing is done. You don't need GCPs to georeference them to each other; however, you need to be able to identify the same objects in both mosaics. Depending on how big your field is, anywhere between 5 and 8 points should be enough.
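For a rough idea of what this co-registration amounts to: if you read the map coordinates of the same objects off both mosaics (in QGIS, for example), you can estimate the shift between them. The sketch below assumes a simple constant-translation model, and every coordinate in it is made up for illustration.

```python
# Estimate the constant x/y shift between two mosaics from pairs of
# matched points (the same object identified in the RGB and MSP mosaics).
# In practice you would read these coordinates off in QGIS or ArcMap.

def estimate_shift(rgb_points, msp_points):
    """Average offset (dx, dy) that maps RGB coordinates onto MSP ones."""
    n = len(rgb_points)
    dx = sum(mx - rx for (rx, _), (mx, _) in zip(rgb_points, msp_points)) / n
    dy = sum(my - ry for (_, ry), (_, my) in zip(rgb_points, msp_points)) / n
    return dx, dy

# Five matched points (map coordinates, metres) with made-up values.
rgb = [(1000.0, 2000.0), (1050.0, 2010.0), (1100.0, 1990.0),
       (1020.0, 1950.0), (1080.0, 2040.0)]
msp = [(1000.8, 1999.4), (1050.8, 2009.4), (1100.8, 1989.4),
       (1020.8, 1949.4), (1080.8, 2039.4)]

dx, dy = estimate_shift(rgb, msp)
print(dx, dy)
```

A real georeferencing tool fits a full affine (or higher-order) transform rather than a pure translation, but the principle of averaging over several matched points is the same.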

Ok, would there be any benefit in using virtual GCPs during processing? For example, first processing the RGB project and deriving GCPs from it, then processing the multispectral mosaic? Or would that give the same result?


Hey Matthias,

Using GCPs would increase the geolocation accuracy, and it would also affect the accuracy of your point cloud and orthomosaic. I am not sure about the virtual GCPs, though.

Hi Matthias,

Indeed, Selim is right: if you would like to compare the projects, you will first have to align them, either using GCPs or perhaps by aligning the RGB project to the MSP one.

Sequoia has been developed specifically with the agricultural user in mind. The main use case is scouting crops and identifying how they grow. The focus of Sequoia therefore lies on delivering highly precise and accurate multispectral data. The RGB camera on Sequoia is an add-on, designed to generate overview images of agricultural fields. It therefore cannot provide images of the same quality as a dedicated RGB camera such as the G9X.
Sequoia's RGB sensor has a rolling shutter, while the multispectral sensors have global shutters. This difference leads to some restrictions on what the RGB results can be used for.



I agree. However, in our project we want to extract NDVI values for individual trees in a sparse forest. These individual trees can only be identified in the orthophoto, so a good match between both products would be appreciated. It's something you could consider for future development.
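For context, the NDVI values mentioned here are computed per pixel from the red and near-infrared reflectance maps. A minimal sketch in plain Python, with illustrative reflectance values only:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one pixel.

    nir and red are reflectance values in [0, 1]; the result lies
    in [-1, 1]. Dense vegetation tends towards 1, bare soil near 0.
    """
    denom = nir + red
    return (nir - red) / denom if denom else 0.0

# Illustrative reflectances: a healthy tree canopy vs bare ground.
print(ndvi(0.50, 0.08))  # high NDVI, vegetation
print(ndvi(0.25, 0.20))  # low NDVI, soil
```

In the workflow discussed here, this index would be computed over the whole NIR and red reflectance maps and then sampled at the tree positions identified in the RGB orthophoto, which is why the two products need to line up.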




Have you tried processing the RGB and the MSP together on our cloud? When processed together, the results should be aligned unless there are problems in the RGB project.




I have the same problem flying over vineyards. I need to use the RGB to create a mask for row selection. I don't understand whether you are suggesting processing the multispectral and RGB images together… you mention the cloud. Can you explain the correct workflow to me?

Thank you


Hi Mario,

There are some projects where the RGB and multispectral Sequoia images can be processed together, but in most cases it does not work, so it is not very consistent. There will be a slight shift because the RGB is not processed as part of the rig. It is always better to process them separately (step 1) and then merge both projects (steps 2 and 3). You can then use your RGB for masking.
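As an illustration of the masking step mentioned above (this is not a Pix4D feature; the excess-green index and threshold below are just one common, assumed choice), a per-pixel vegetation test on the RGB orthomosaic could look like:

```python
def is_vegetation(r, g, b, threshold=0.1):
    """Classify an RGB pixel as vegetation using the excess-green
    index ExG = 2g - r - b on chromaticity-normalised channels."""
    total = r + g + b
    if total == 0:
        return False
    rn, gn, bn = r / total, g / total, b / total
    return (2 * gn - rn - bn) > threshold

# Illustrative 8-bit pixel values: a vine-row pixel vs bare soil.
print(is_vegetation(50, 150, 40))   # green canopy
print(is_vegetation(120, 100, 80))  # brownish soil
```

Once the RGB and multispectral outputs are aligned by the merge, a mask built this way on the RGB orthomosaic can be applied to the reflectance maps to keep only the planted rows.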

Thank you,

how can I merge the RGB and multispectral projects? One will be an orthophoto and the other one 4 reflectance maps. Do you have a tutorial for this, or can you describe it?


Mario, yes you can. One of our users, Michael Koontz, attempted the merge and it was successful. You can see the comments here: –Desktop-Cloud-Hybrid-Pix4Dmapper-Pro-4-3-31-imagery-with-different-spectral-signatures-one-block-after-subproject-merge-two-blocks-after-cloud-processing

It will be the same as merging two projects together.

If you have any issues do not hesitate to reach out :slight_smile:

Thank you!

I forgot to say I would like to do this with Sequoia's four bands and its RGB. Usually the final results don't fit exactly, and I cannot obtain a good mask from the RGB for plant selection in the multispectral data.

It is very interesting, but I still have some doubts after a first read:

  1. Can I process everything on my PC, or must some steps be done in the cloud? I ask because I read that Michael Koontz uses the cloud for some steps…

  2. After processing, can I export one RGB orthophoto and 4 reflectance maps (radiometrically calibrated with the panel)? Do you have some screenshots of this?



Sequoia RGB and the other multispectral bands should work the same way. The mask will not fit out of the box because the RGB is a separate camera and is not processed as part of the rig, so the outputs will not be perfectly aligned when processed separately. But merging should work after step 1.

  1. Processing in the cloud is an issue here, as the cloud will always run all the steps. It is basically like processing the RGB and multispec in a single project without merging, and it will use only one template. Processing on desktop lets you control each step: process the Sequoia RGB with the Ag RGB template and the multispec with the Ag Multispectral template (step 1), then create a merged project and process steps 2 and 3.

  2. Yes, you should get an RGB ortho and 4 individual band reflectance maps. If you are using targets, they will be taken into account for the radiometric correction. Unfortunately, I don't have any screenshots, but you could try with a sample dataset and let us know :slight_smile: