I’ve written software that detects spray-painted GCPs using convolutional networks. The detection works OK now, but I need to go from the pixel position to the real-world 3D position in order to know which GCP corresponds to which detection.
Since there is not (yet?) an API for Pix4D, I’m trying to solve this by reading the Pix4D-generated calibrated camera parameters file and then tracing a ray from the camera through the detected pixel location toward the GCP. I’m having some problems getting the geometry to work, though; it’s not as easy as I thought.
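For context, the geometry I’m attempting looks roughly like this: a minimal pinhole sketch that ignores lens distortion and intersects the pixel ray with a flat ground plane. The rotation convention (camera-to-world), the camera axis signs, and the flat-ground assumption are all guesses on my part; I’m not sure they match how Pix4D writes its parameters:

```python
import numpy as np

def pixel_to_ground(pixel, cam_pos, R, f, cx, cy, ground_z=0.0):
    """Project a detected pixel onto a horizontal ground plane z = ground_z.

    Assumptions (not verified against Pix4D's conventions):
    - simple pinhole model, no lens distortion
    - R is the camera-to-world rotation matrix
    - the camera looks along its local +z axis
    - f, cx, cy are in pixels
    """
    u, v = pixel
    # Ray direction in camera coordinates
    d_cam = np.array([u - cx, v - cy, f])
    # Rotate the ray into world coordinates
    d_world = R @ d_cam
    # Solve cam_pos.z + t * d_world.z == ground_z for t
    t = (ground_z - cam_pos[2]) / d_world[2]
    return cam_pos + t * d_world
```

For a nadir camera 100 m above flat ground, the principal point should land directly below the camera, which at least sanity-checks the intersection math, but my real images are oblique, which is where things fall apart for me.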
My question is: using Pix4D, is the calibrated camera parameters file the best approach for going from a 2D pixel position to a 3D position?
I realize this is maybe out of scope for the user forum, but I figured you might be able to help me out or suggest alternative approaches.
Thanks in advance!