
Accounting for shadows from clouds in creating orthomosaic/point cloud

I am running my Pix4D outputs through a classification algorithm outside of Pix4D that classifies objects based in part on the RGB values of the points that constitute them. My classification runs into problems when clouds cast shadows onto the objects in some of the photos: the shadows discolour the orthomosaics and point clouds, making objects darker in colour than the same objects look in full sunlight.

I am curious whether anyone has found a solution to this problem before. Ideally, there would be a tool/function in Pix4D to auto-correct colours to account for clouds. But if no such thing exists, maybe someone knows if Pix4D can be calibrated to favour the brightest colour value among all of those tied to a point when choosing what to project into the orthomosaic/point cloud? In many cases there is at least one photo of a point that is not shadowed by clouds, and its RGB value would be preferable to any of the others.
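To make the "favour the brightest value" idea concrete, here is a minimal sketch (not a Pix4D feature; the sample values are hypothetical) of picking, for one 3D point, the candidate RGB observation with the highest luminance instead of averaging:

```python
import numpy as np

# Hypothetical candidate observations of one 3D point: each row is the RGB
# value that one photo records for that point (0-255). The shadowed photo
# is darker across all three channels.
observations = np.array([
    [ 92,  88,  70],   # cloud-shadowed photo
    [180, 172, 140],   # sunlit photo
    [175, 168, 137],   # sunlit photo
], dtype=float)

# Pick the observation with the highest overall brightness (luminance),
# so shadowed photos do not darken the result the way an average would.
luminance = observations @ np.array([0.299, 0.587, 0.114])
best = observations[np.argmax(luminance)]
print(best)  # -> [180. 172. 140.]
```

The trade-off is that always taking the maximum amplifies specular highlights and overexposed frames, so a robust version might instead discard only the darkest outliers before averaging.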

In addition, if anyone knows what function Pix4D uses to automatically decide which RGB value, of all the values in all the photos tied to a point, is displayed for the associated point in the point cloud (e.g., the average of the RGB values across all photos associated with that point), that would be stellar information to have as well.

Any insights would be helpful.

Hello Patrick,

In general terms, to give a color to a particular pixel, a weight is given to each of the input images that sees it. According to the weight each image has, it contributes more or less to the coloring of the orthomosaic pixel. The weight depends on the position of each camera and on the distance between the image pixel and the image center.

There are some other algorithms applied as well, but this is the part most relevant to your question.

Dealing with clouds is always tricky and it is not easy to solve.


Hi Daniel,

This is helpful, thank you.

In regard to the other algorithms you refer to, is there any recalibration of reflection levels? If so, during what stage of processing does it take place (e.g., during point matching) and is there any option for us to modify the parameters?

If there is no recalibration of reflection levels executed during processing, is anyone at Pix4D aware of users recalibrating their images before putting them into Pix4D, using companion software? We have toyed with the idea of using Photoshop but are interested to know whether there is a remote-sensing-specific software out there that we have not been able to find.
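For what it is worth, one crude pre-processing step along these lines can be scripted without Photoshop: scale each image so its mean luminance matches a reference value before import. This is only a sketch (the function name and target value are made up, and proper radiometric calibration would use reflectance panels rather than a global gain):

```python
import numpy as np

def match_brightness(image, reference_mean):
    """Scale an RGB image (H x W x 3 array, 0-255 floats) by a single
    gain so its mean luminance matches reference_mean. A crude
    normalization; it cannot fix shadows that cover only part of a frame."""
    lum = image @ np.array([0.299, 0.587, 0.114])
    gain = reference_mean / lum.mean()
    return np.clip(image * gain, 0.0, 255.0)

# Hypothetical shadowed frame: uniformly dark gray.
dark = np.full((4, 4, 3), 80.0)
corrected = match_brightness(dark, reference_mean=160.0)
```

Because the gain is applied per pixel uniformly, a global correction like this only helps when a whole frame is shadowed; partial cloud shadows within a frame would still need a spatially varying mask.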


Hi Patrick, 

Sorry for the delay and thank you for your comments.

First of all, we do not recommend using Photoshop, as some filters/processes can even modify the geometry of the image.

As for your question, I am not sure I understand it fully, but I think this other post can help you:

Pix4D uses a custom color alignment algorithm. This algorithm accounts for both exposure mismatches between images of a project (global offset and gain per image) and camera-related defects such as vignetting (that result in pixel-varying correction terms). While this problem can seem complex at first, we exploit the results of the geometric computations of step 1 (mandatory) and step 2 (when available) to make it tractable.
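To illustrate the two kinds of correction terms that quote mentions (this is an illustrative model only, not Pix4D's actual implementation; the radial vignetting formula and all parameter values are assumptions), a per-image exposure correction plus a pixel-varying vignetting term could look like:

```python
import numpy as np

def correct_image(image, gain, offset, vignette_k):
    """image: H x W x 3 array, 0-255. Applies a global per-image gain and
    offset (exposure mismatch) and a radial term that brightens pixels far
    from the image center (undoing vignetting falloff of strength vignette_k)."""
    h, w, _ = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Normalized squared radius from the image center (0 at center, 1 at corner).
    r2 = ((xx - w / 2) ** 2 + (yy - h / 2) ** 2) / ((w / 2) ** 2 + (h / 2) ** 2)
    vignette = 1.0 + vignette_k * r2          # pixel-varying correction term
    corrected = (image * gain + offset) * vignette[..., None]
    return np.clip(corrected, 0.0, 255.0)

# Hypothetical frame: uniform mid-gray, corrected with assumed parameters.
frame = np.full((64, 64, 3), 100.0)
out = correct_image(frame, gain=1.2, offset=5.0, vignette_k=0.3)
```

The gain/offset pair corresponds to the "global offset and gain per image" in the quote, while the radial factor stands in for the "pixel-varying correction terms" for camera defects such as vignetting.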

I hope this answer helps you.