Export calculated camera orientations?

Hi folks,

I’m trying to determine the best way to identify the source photo(s) that contain a given model view from an export. I’m exporting 3D models to OBJ/PLY format and that’s working very well. But I’d like to back-calculate the photos that contain a given portion of that model (so, while viewing the model, I could load the best photo that contained that viewpoint).

Pix4D can export the cameras, but that output only includes the lat/long/alt values. Clearly, though, in preview mode you can see each camera and its viewport once the input has been processed at least through the first pass. Is there any way to export the orientation / viewport details for each photo? Camera Parameters and Undistorted Images don’t seem to contain what I’m looking for…

Bump. Is there no way to do this?

I’d like to bump this one as well - exporting the position AND orientation of every photo collected would be very helpful for an application we are working on currently - is there a way to do this?

 

Ah, silly me - should have dived into the processing files. …

Looks like camera positions and orientations are given in the “project_calibrated_external_camera_parameters.txt” file under the “1_initial” directory.

However, one remaining question is whether the omega/phi/kappa angles are given with respect to WGS84 or to the mapping projection.

 

I doubt you’ll find an answer here - I’m not sure why this forum exists if simple questions can go a month unanswered. But I’m about to determine the same thing (maybe for the same reason). I’ll share with you here what I find out.

It seems to me the answer depends on which file you look at. project_calibrated_external_camera_parameters_wgs84.txt appears to be in degrees, while project_calibrated_external_camera_parameters.txt looks like it’s in OpenGL model units. I haven’t actually used them to confirm this, but the scale of the numbers, at least, seems to support it. I originally thought they were radians, but I have a file from a double-circle pass of photos and the values all fall between -1 and 1, which wouldn’t cover a full circle in radians.
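
To check, I’m planning to run a quick (untested) comparison of the angle columns in the two files; if they match, the units are presumably the same. The column layout assumed below (one header line, then imageName X Y Z Omega Phi Kappa per row) is just what I think I’m seeing when I eyeball the files, so treat it as a guess:

    # Untested sanity check: compare the omega/phi/kappa columns of the two parameter
    # files. File names and column order are my assumptions from the 1_initial/params folder.
    import numpy as np

    def load_params(path):
        rows = {}
        with open(path) as f:
            next(f)  # skip the assumed one-line header
            for line in f:
                parts = line.split()
                if len(parts) >= 7:
                    rows[parts[0]] = np.array([float(v) for v in parts[1:7]])
        return rows

    a = load_params("project_calibrated_external_camera_parameters.txt")
    b = load_params("project_calibrated_external_camera_parameters_wgs84.txt")
    for name in sorted(set(a) & set(b)):
        print(name, a[name][3:6] - b[name][3:6])  # columns 3..5 = omega, phi, kappa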

 

Dear Chad and Robert,

We apologize for the very late reply. We are aware that many posts on the Forum remain unanswered, and we are working on the best way to handle both the Forum posts and the emails.

We encourage users to post to the Forum, as other people who have had the same problem might be able to help you, and other users can benefit from the answer. However, if a rapid and precise answer is required on a technical question such as this one, we suggest you contact our support team directly (support@pix4d.com).

Regarding your question, you will indeed find the positions and orientations of your calibrated cameras in the following .txt files:

  • For X, Y, Z (output coordinate system) and orientations in degrees (output coordinate system): …\project_name\1_initial\params\project_name_external_camera_parameters 
  • For Long, Lat, Alt (WGS84) coordinates and orientations in degrees (output coordinate system): …\project_name\1_initial\params\project_name_external_camera_parameters_wgs84

The orientation parameters should therefore have the same values in both files.
More information about the parameter files: https://support.pix4d.com/hc/en-us/articles/202977149

The output coordinate system depends on your input (image geolocation and GCPs). If you do not have any georeference (image coordinates or GCPs), then the output coordinate system will be arbitrary.

Regarding the possibility of selecting the best photo that contains a given viewpoint of the 3D model, this is currently not possible in Pix4Dmapper.
It is something we do for the orthomosaic (using the Mosaic Editor: https://support.pix4d.com/hc/en-us/articles/202560079) but not for the 3D model.
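
As a workaround, here is a rough, untested sketch (in Python) of how the exported external parameters could be used outside Pix4Dmapper to find which calibrated images see a given point of the model. Please note that the rotation convention, the camera axis signs, and the simple pinhole model below are assumptions to verify against the documentation linked above; the focal length, principal point, and image size come from your own camera model, and the helper names are only for illustration:

    # Untested sketch: list the calibrated photos that "see" a given 3D point.
    # Assumptions (please verify): angles are in degrees, R = Rx(omega) @ Ry(phi) @ Rz(kappa)
    # rotates the camera axes into the output coordinate system, and the camera looks
    # down its -Z axis (classic photogrammetric convention). Lens distortion is ignored.
    import numpy as np

    def rotation_opk(omega, phi, kappa):
        o, p, k = np.radians([omega, phi, kappa])
        Rx = np.array([[1, 0, 0],
                       [0, np.cos(o), -np.sin(o)],
                       [0, np.sin(o),  np.cos(o)]])
        Ry = np.array([[ np.cos(p), 0, np.sin(p)],
                       [0, 1, 0],
                       [-np.sin(p), 0, np.cos(p)]])
        Rz = np.array([[np.cos(k), -np.sin(k), 0],
                       [np.sin(k),  np.cos(k), 0],
                       [0, 0, 1]])
        return Rx @ Ry @ Rz

    def photos_seeing_point(point, cameras, f_px, width, height):
        # point: (X, Y, Z) in the output coordinate system
        # cameras: {image_name: (X, Y, Z, omega, phi, kappa)} read from the
        #          ..._external_camera_parameters file
        # f_px: focal length in pixels; width, height: image size in pixels
        hits = []
        for name, (X, Y, Z, om, ph, ka) in cameras.items():
            R = rotation_opk(om, ph, ka)
            p_cam = R.T @ (np.asarray(point, float) - np.array([X, Y, Z]))
            if p_cam[2] >= 0:
                continue  # point is behind the camera under this convention
            u = width / 2.0 + f_px * p_cam[0] / -p_cam[2]
            v = height / 2.0 - f_px * p_cam[1] / -p_cam[2]  # image y may need flipping
            if 0 <= u < width and 0 <= v < height:
                hits.append((name, u, v))
        return hits

Ranking the returned images, for example by distance from the camera to the point or by how close the projection falls to the image centre, would give a reasonable “best photo” for that viewpoint.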

Again, we are sorry for any inconvenience caused by our late reply.

Regards,

 

Can the cameras please be added to the FBX file that is generated?

Dear Jason,

I have added your idea to our Suggestion list. Our Product Development Team will consider it for a future version of the software.

Best regards,

That would be great. It would remove extra steps and software needed to do tracking and photo matching for CG work.

Do you have an appropriate area to make feature suggestions / requests, or is this that place? We’ve just discovered your image-annotation feature and are starting to use it pretty heavily. We’d love to explore ways this could be partially automated, but it’s not clear from the camera/photo data files how this information is stored. (For instance, we know the expected colors/color ranges of the objects we’re imaging ahead of time, so we could pre-filter by color if we knew how those data formats worked.) Happy to move this question to an appropriate location.
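
For context, the kind of pre-filtering I have in mind is nothing fancy; roughly the OpenCV sketch below, which is purely illustrative and independent of how Pix4D stores the annotation data (the HSV bounds, paths, and threshold are made up):

    # Illustration only: keep the photos where a meaningful fraction of pixels falls
    # in a known colour range, so only those need manual annotation.
    import glob
    import cv2
    import numpy as np

    def fraction_in_range(image_path, hsv_lo=(100, 80, 80), hsv_hi=(130, 255, 255)):
        img = cv2.imread(image_path)
        if img is None:
            return 0.0
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, np.array(hsv_lo, np.uint8), np.array(hsv_hi, np.uint8))
        return float(np.count_nonzero(mask)) / mask.size

    photo_paths = glob.glob("images/*.JPG")  # wherever the original photos live
    candidates = [p for p in photo_paths if fraction_in_range(p) > 0.01]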

Hi Chad,

 

You could post your future suggestions under the category “Projects, Experiences and Opinions”:

https://support.pix4d.com/hc/en-us/community/topics/200278225-Projects-Experiences-and-Opinions

 

Currently, the image annotation cannot be done automatically. Your idea is very good! I have added it to our Suggestion List.

 

Best regards,