Thanks for posting, Adam. It helps to see the side-by-side results you are getting from different systems processing the same data. This is a bit alarming, as I would expect the end results to be identical regardless of processing hardware, given identical inputs and software settings.
Having said that, are you sure that memory is the root cause here? Each system differs in several other ways that could have an equal impact on processing: the CPU and GPU subsystems vary from machine to machine (i7 to Xeon, GeForce to Quadro to Tesla). Those are a lot of variables in the mix to pin this down on just the quantity of memory being utilized.
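For what it's worth, one plausible contributor (purely my speculation, not anything confirmed by Pix4D) is floating-point arithmetic: it is not associative, so the order in which a parallel computation combines millions of values depends on the core count and scheduler, and tiny rounding differences can then propagate through an iterative optimization. Here is a minimal Python sketch of the effect, not anyone's actual pipeline:

```python
import random

# Illustrative only: floating-point addition is not associative, so the
# order in which a reduction combines values can change the result.
# Different CPU/GPU thread counts schedule reductions differently, which
# is one *hypothetical* source of machine-to-machine drift.

random.seed(42)
values = [random.uniform(-1e6, 1e6) for _ in range(100_000)]

# One fixed, sequential order of additions.
sequential = sum(values)

# Partial sums per chunk, combined afterward, as a multi-threaded
# reduction might produce. Same inputs, different association.
chunked = sum(sum(values[i:i + 1000]) for i in range(0, len(values), 1000))

print(f"sequential: {sequential!r}")
print(f"chunked:    {chunked!r}")
print(f"difference: {sequential - chunked!r}")  # often nonzero in the last bits
```

A difference like that is tiny on its own, but if the software feeds it into an iterative solver, two machines could plausibly converge to slightly different answers from identical inputs.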
You made a comment earlier that photogrammetry is not an exact science. I would argue that, in the past, that comment carried more weight because of the human element in the mapping process (thinking of old-school stereoplotter equipment here). But with these newer automated processes, I would expect the end results to be more repeatable, particularly when starting from the same set of data.
I would be interested to hear from Pix4D staff on a likely explanation for the variability you are showing. One general problem for me as a professional land surveyor is that I do not have a thorough understanding of how the inner workings of the software produce the end results we are getting, and some of that will never be known due to trade-secret concerns. So I have to do thorough empirical testing to ensure that the end results are acceptable for my needs (and I have done so). Ultimately, I have to be able to defend the work I am signing off on, and this 'equipment-based variability' in processing really throws another wrench into the works.
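For context, my acceptance testing boils down to comparing check-point coordinates from the processed output against independently surveyed values. A minimal sketch of that kind of check (the point names, coordinates, and tolerance below are hypothetical, and not tied to any particular software's output format):

```python
import math

# Hypothetical check points: surveyed ground truth vs. coordinates
# measured in the photogrammetric output. Units: feet.
surveyed = {
    "CP1": (5000.00, 5000.00, 100.00),
    "CP2": (5100.00, 5050.00, 101.25),
    "CP3": (5050.00, 5150.00,  99.80),
}
measured = {
    "CP1": (5000.02, 4999.99, 100.04),
    "CP2": (5100.01, 5050.03, 101.20),
    "CP3": (5049.97, 5150.02,  99.86),
}

def rmse(residuals):
    """Root-mean-square error of a list of residuals."""
    return math.sqrt(sum(r * r for r in residuals) / len(residuals))

# Horizontal residual per point (2D distance), vertical residual separately.
horiz = [math.hypot(m[0] - s[0], m[1] - s[1])
         for s, m in ((surveyed[k], measured[k]) for k in surveyed)]
vert = [measured[k][2] - surveyed[k][2] for k in surveyed]

print(f"horizontal RMSE: {rmse(horiz):.3f} ft")
print(f"vertical RMSE:   {rmse(vert):.3f} ft")

# Hypothetical project tolerance; the real spec depends on the job.
TOLERANCE_FT = 0.10
assert rmse(horiz) <= TOLERANCE_FT and rmse(vert) <= TOLERANCE_FT
```

That kind of test tells me whether a given output meets spec, but it does not explain why two machines disagree in the first place, which is why I would like to hear Pix4D weigh in.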