
Polyline in Raycloud: Statistical meaning of "error" in terrain 3D length?

We’re doing morphometrics of wild elephant.

For a polyline in the Raycloud:

I would like to know what kind of “error”, in the statistical sense, is stated next to Terrain 3D length?

Standard error?

For your amusement: This is what a trial project looks like:

We don’t use drones; we fly in fixed-wing aircraft ourselves :slight_smile:

Wynand Uys

Hoedspruit

South Africa

 

Hi Wynand,

Thank you for sharing a screenshot of your reconstruction.

  1. I trust you are aware that photogrammetric processing relies on the scene being static while you capture your images. If the elephants move while you are capturing your photos, error may be introduced into your measurements, so please capture the images as quickly as possible to minimize the chance that the elephants change position.

  2. The +/- error that Pix4D Desktop reports for your polyline measurements is a summation of the theoretical error of each of the sublines that define your polyline, which is defined by the maximum theoretical error of the subline’s vertices. It looks like your polylines only comprise a single subline, so the theoretical error of your polyline is defined by the theoretical error estimation for the two manual tie points that define your polyline’s vertices.

You can see the theoretical error that Pix4D Desktop reports for any manual tie point by selecting it and checking the properties menu on the right side, and you can learn more about how Pix4D estimates the theoretical error of a tie point in the article 3D error estimation from tie points.
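The summation described above can be sketched roughly as follows. This is a minimal illustration, not Pix4D's actual code: it assumes each vertex carries the per-axis theoretical errors reported for a manual tie point, and that combining them into a single 3D magnitude is done with a Euclidean norm (my assumption). The vertex values are made up.

```python
import math

# Hypothetical vertices: each with a position and the per-axis theoretical
# error (sigma_x, sigma_y, sigma_z) Pix4D reports for a manual tie point.
# Values are invented purely for illustration.
vertices = [
    {"pos": (0.0, 0.0, 0.0), "err": (0.01, 0.01, 0.02)},
    {"pos": (1.5, 0.3, 0.1), "err": (0.02, 0.01, 0.03)},
]

def err_magnitude(err):
    """Collapse per-axis errors into a single 3D error magnitude (assumed Euclidean norm)."""
    return math.sqrt(sum(e * e for e in err))

def polyline_error(vertices):
    """Sum over sublines, each contributing the larger of its two vertices' error magnitudes."""
    return sum(
        max(err_magnitude(a["err"]), err_magnitude(b["err"]))
        for a, b in zip(vertices, vertices[1:])
    )

# With a single subline (two vertices), the polyline error is simply the
# larger of the two vertex errors, as in the case discussed here.
print(f"+/- {polyline_error(vertices):.3f} m")
```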

  3. Your screenshot indicates that the elephant was approximately 1.94 meters tall. It is difficult to judge the scale of the elephant from the screenshot, but I want to make sure that your project is accurately scaled, because that elephant looks like one of the larger animals in the group and yet is still shorter than the lowest average height of a mature African elephant.

You can ensure that your project is scaled accurately by incorporating image geotags or ground control points.

Do not hesitate to let me know if you have any questions.

Thank you Andrew!

  1. Static scene: Yes, this is a challenge. Fortunately the camera moves fast: 40 m/sec, so in 10 sec we can capture 50+ images along a curved track that is 400 m long. When we examine the images we can see which animals moved during that time and discard them. We have had good success doing only one or two elephant in a scene (close-up) but we’re pushing the boundaries now, hoping to do 10 or 20 in a batch. We have to get about 1 000 on record. We do multiple passes for every group, so if an animal moves in one attempt we can catch it stationary during the next pass.

  2. Thanks for explaining the +/- error

  3. So far the projects seem to scale quite well. The shoulder heights of the elephant are what we expect. There are a lot of youngsters and no mature bulls in the screenshot, so 1.94 m shoulder height is about right for that particular animal. Some of the others in that same model came out at 2.5 and 2.7 m, and they are clearly mature cows. Thank you for going to the trouble of looking up average heights of elephant. The folks I’m working with have measured thousands of elephant in various parts of Africa and they’d be quick to recognise a scaling problem. We cannot do GCPs because we don’t have anyone on the ground in the vast areas that we work in, but we are taking steps to improve our image geotagging. The biggest improvement is using a logger that records GNSS positions at a high frequency and a camera hot-shoe adapter to log the times of shutter activation to the millisecond. In one millisecond the aircraft moves 4 cm. Previously, we had loggers that recorded GPS positions at 1 sec intervals and we had trouble getting precise times of shutter activation, so the geotagging was off by meters and even tens of meters.
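The timing arithmetic in point 3 can be checked back-of-envelope: at a given ground speed, along-track geotag error grows linearly with the shutter-timestamp error. The 40 m/sec speed comes from the discussion above; the function name is just for illustration.

```python
# Along-track geotag error from shutter-timestamp uncertainty.
# Ground speed of 40 m/s is taken from the post; the rest is illustrative.
SPEED_M_PER_S = 40.0

def position_error_m(timing_error_s, speed=SPEED_M_PER_S):
    """Along-track position error caused by a shutter timestamp error."""
    return speed * timing_error_s

print(f"{position_error_m(0.001):.2f} m")  # 1 ms hot-shoe timestamp -> 0.04 m (4 cm)
print(f"{position_error_m(0.5):.0f} m")    # +/-0.5 s worst case with a 1 Hz logger -> 20 m
```

This matches the figures in the post: millisecond shutter logging keeps the geotag error at centimeter level, while a 1-second logger alone can put it off by tens of meters.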

Below is a summary of the measurements from that project, pasted back onto one of the images.


 

Oh, sorry, the image is downsampled too much to read the labels. You’ll just have to believe me that the results are entirely plausible.

Hi Wynand,

Amazing project!

As you say, the more precise the geotags are, the better the accuracy you will get.

Thank you for sharing it.