Maximum computer for time reduction.

Hello All,

@Tom: As expected, the more expensive the GPU, the better it is, since it has more CUDA cores (workers) and more RAM. Both factors are important for reducing processing time: the images are loaded into the GPU's memory and the cores do the computing. Having said that, we do not believe that the most expensive option will reduce processing time enough to justify the price difference. Also keep in mind that it is important to have a balanced machine so as to get the most out of all the components. Considering the rest of the specs you mention, we believe that the first or the second GPU option will give you reduced processing time.
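
As a rough illustration of why GPU RAM matters (illustrative arithmetic only, not a Pix4D formula; the 20 MP and 3 bytes/pixel figures are assumptions for a typical decoded RGB image):

```python
# Illustrative only: how many decoded images fit in GPU memory at once.
# Assumes ~20 MP images and 3 bytes per pixel (uncompressed RGB).
def images_in_vram(vram_gb, megapixels=20, bytes_per_pixel=3):
    image_bytes = megapixels * 1e6 * bytes_per_pixel  # decoded size, not JPEG size
    return int(vram_gb * 1e9 // image_bytes)

for gb in (4, 8, 11):
    print(f"{gb} GB VRAM -> ~{images_in_vram(gb)} images resident at once")
```

Note that compressed JPEG size on disk is much smaller than the decoded size, which is why VRAM fills up faster than file sizes would suggest.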

@Steven and John: We have noticed an issue with processing time during step 3 when using Quadro GPUs. After investigation, we believe the issue can be solved by changing some options in the Nvidia Control Panel. You can find more information here: https://support.pix4d.com/hc/en-us/articles/218195063.

Best regards,


Thanks @Pix4D, for the comment.

I already decided to go with the GTX1070. My goal is a “very good machine” and the GTX1070 seems to be reasonably comparable to the GTX980 and Titan Black. I will have the new machine in a few weeks and am looking forward to seeing how it performs relative to my existing computer (32 hours to fully process about 1800 images).

What is better for Pix4D, Quadro or GeForce?

NVIDIA GeForce!

Pix4D Support:

I purchased the new desktop that I described above - very happy with it - it cut my project processing time at least in half.

Now upgrading my laptop for field processing, and I have another GPU question:

I am considering two different MSI mobile workstations. Both are identical except for the GPU cards:

Both have Intel i7-7700HQ processors (4 cores, 2.8 GHz), 16 GB RAM, 256 GB SSD, 2 TB HDD.

The only difference is one has a GeForce GTX 1060 GPU, the other has Quadro P3000.

Both GPUs have 1280 CUDA cores. The only difference is GeForce vs. Quadro technology.

I have read about slower processing some people have experienced with Quadro cards, but apparently that can be solved with driver settings.

Is there any reason to favor GeForce over Quadro in this case, with all else being equal?

I would prefer the model with the Quadro for some minor features unrelated to running Pix4D, but optimizing for Pix4D is my priority so I would get the GeForce model if that is a definite advantage.

Thanks for the advice.

Hi Tom,

In general we do favor GeForce GPUs as they typically offer a better performance/price ratio. The GeForce GPU does seem like a much more powerful GPU, though for Pix4Dmapper this would only speed up step 1. We do expect a slight increase in performance, but it would certainly not cut processing time in half.

Best Regards,

Following up on my previous comment: yes, my previous slow run with the Quadro K620 was due to the settings on that card, and it sped up considerably when the settings were adjusted. Interestingly, I now have two PCs with similar specs: same video card, 16 GB RAM, 3.6 GHz processor, but one is a Gen 4 Intel, the other Gen 7. I ran the previous 1000-image Pix4D file on both, with identical processing options - they finished after about 7 hours at exactly the same time, within a minute of each other. No difference between Gen 4 and Gen 7!

Hello all…

Just want to share my experience using the Ryzen platform.

My workstation is:
AMD Ryzen 7 1700
ADATA XPG DDR4 PC2800 32GB (2×16)
Asus Prime B350 A
Intel SSD PCIe M.2 256GB
Zotac GT 1030 2GB GDDR5

I just finished processing about 3400+ images from my P3P (12 MP) covering an area of around 200 hectares. It took approximately 44 hours for all 3 steps, including GCPs. My template is 3D Maps with all settings at default.
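
A simple way to compare runs like this across machines is images per hour rather than wall-clock time alone; a quick sketch using the figures above:

```python
# Throughput for the run above: 3400 images, all 3 steps, ~44 hours.
images, hours = 3400, 44
rate = images / hours
print(f"~{rate:.0f} images/hour")  # ~77 images/hour
```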

Just wondering: if my project were processed on an i7 platform, would it be faster or slower?

Anyone with similar experience, please share… I have to recommend a workstation to my company and they asked me that question…

Thanks…

Hi Ary, this post might be of interest to you: AMD Ryzen 7 computer for Pix4D

Brian also just started a new benchmark post on which you might want to keep an eye for hardware comparisons: Pix4D Computer hardware community benchmark

Hi Pix4D folks.

Here's my problem.

At my company we have been trying to put together a good, up-to-date workstation for processing maps with more than 2000 images, so we need a good machine.

Everything we have built ends up giving us problems somewhere, creating bottlenecks. We would like to avoid this.

Could you give me a current recommendation for a complete computer that works as well as possible with your program?

Regards, and many thanks in advance.

@Ina @Pix4D Support

I am wondering if Pix4Dmodel isn’t the right product for this project I’m working on.

I am using Pix4Dmodel and am trying to process a dataset for 340 acres (138 hectares), ~30,000 images and ~350GB file size on a Windows 10 machine with these specs, processing off of the SSD:

  • CPU: Intel Core i7-6950X (10 cores)(Overclocked to 4.1GHz)
  • GPU: MSI GeForce GTX 1080 Ti Founder’s Edition
  • Motherboard: Gigabyte GA-X99 Ultra Gaming
  • RAM: 128 GB (8x16GB) G.Skill DDR4 Trident Z 3600 MHz PC4-28800 CL17 (17-18-18-38)
  • SSD: Samsung 850 EVO 2TB 2.5-Inch SATA III
  • HDD: HGST Deskstar NAS 4TB 7200rpm SATA 6Gb/s, 64MB cache, 3.5"
  • Liquid Cooling: Corsair Hydro H60 Liquid CPU Cooler
  • Case: Thermaltake View 27 Gull-Wing Window ATX Mid-Tower Chassis
  • Fans: Thermaltake Riing 12 RGB 120mm x3
  • Power Supply: 750W Thermaltake SMART Series Power Supply 80+ Bronze Certified, Active PFC, SLI

I am getting processing times of approximately 10 days to complete Step 1 ‘Initial Processing’ and Step 2 ‘Point Cloud and Mesh’. The result is terrible quality (see here on Sketchfab: https://skfb.ly/69DQx), as I believe the point cloud was restricted to 1 million triangles. When I alter processing options to eliminate the 1 million triangle restriction for Step 2 3D Textured Mesh by adjusting 3D Textured Mesh>Settings>Custom>Decimation Criteria:Qualitative/Sensitive, it appears that the processing will take over 4 weeks to complete.

This timeline is obviously not feasible to iterate the project as needed. Please help!

Instead of Pix4Dmodel, should I be using a different product? Perhaps mapper or BIM are more robust?

I have a 2nd GeForce GTX 1080 Ti Founder’s Edition arriving tomorrow that I will install in SLI to see if that helps, and I am considering upgrading to Intel’s i9-7980XE 18-core CPU…but before I spend the money on the new chip and new motherboard/power supply/cooler, etc., I want some reasonable assurance that the upgrade will shorten processing times.

@Pix4D - please help!

Hi Brad

You will likely see processing times decrease, but one question: will it pay for itself in a reasonable time? If yes, why wait?

Thanks a lot for the reply, guys. I'll take a look at all the components, although I'm expecting an enormous outlay, since I'm seeing components like the 2 TB SSD that seem disproportionate to me. What is the reason for this? Wouldn't it be better to have a 1 TB SSD, for example, and every time you finish a project, save everything to a normal hard drive and delete everything from the SSD for the next project?

Do you know why Pix4D closes on me due to Windows problems on many occasions? Is this normal if you use the computer while the images are being processed?

Brad,

The different flavors of Pix4D basically have different interfaces…I only use Mapper Pro but the code behind it is the same.  I don’t think using a different version will change the time frame.

The computer you have is very good, and moving to the Core i9 will help some; I saw maybe a 10% to 15% improvement when I moved to the Core i9. The dual video card is something I haven’t tested, so that will be a very interesting test, as that is my next move to decrease processing time.

The mesh quality on a project that size will never be good, especially with the file size limitations on Sketchfab.  Pix4D is not the best tool for a mesh…I stick to just point clouds but everybody here has different workflows for different customers.

Unfortunately it doesn’t surprise me that such a large project (look at the total gigapixels) takes weeks with high-quality settings in Pix4D. You can turn down the settings to be faster, but in turn the accuracy and quality will be reduced. I haven’t had any luck finding other software that runs faster at high-quality settings.
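
The “total gigapixels” point can be checked with quick arithmetic, assuming roughly 20 MP per Phantom 4 Pro image (that per-image figure is an assumption):

```python
# Back-of-envelope project size, assuming ~20 MP per image (Phantom 4 Pro).
images = 30_000
mp_per_image = 20  # assumption
total_gigapixels = images * mp_per_image / 1000
print(f"{total_gigapixels:.0f} gigapixels")  # 600 gigapixels
```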

The best option for you is either Pix4Dengine (haven’t tested yet) or running multiple computers doing sub-projects in parallel.  If you run 10 sub-projects on 10 computers then you will significantly reduce the time, but of course that is $60,000 worth of hardware.

@Brad, did you mention what you are trying to achieve? If you are doing 138 hectares, are you doing mapping? If so, then you should be using the 3D Maps template, not the 3D Models one.

And there is no need to use 30,000 photos for an area that size either. Some of the quarry and mining sites we do are around 130 hectares and we only use around 600-700 images for a site that size. In fact we take twice as many images, but we usually remove every second one. From these we generate relatively accurate LAS, DSM, DTM GeoTIFF, orthomosaic files and so on. For this type of work we don’t bother generating a mesh file unless we want to add it to our Sketchfab and customers want that. Most don’t. (Here is one site of about 290 hectares that took 620 photos.)

Of course we fly at around 140-200 m high, so the number of images will be fewer than if you fly lower. If you are flying very low, say 50 m, then certainly you will need more images; for the same site I calculate it’s about 3000-5000 images for 130 ha. Even so, I still can’t see how doing this gives any significant advantage for either GIS mapping or volume surveys; usually 80-120 m is fine.

Be aware that having many more images does not necessarily result in better quality or higher-resolution datasets either. In fact it can work out to be the opposite. There has to be a fair space between each photo for the software to triangulate the point cloud positions accurately in 3D space. In particular, having too many photos can affect the Z-axis accuracy and create a lot of noise. Imagine two photos placed nearly on top of one another with only a few cm of separation: the baseline is so small that any slight variance makes it difficult to calculate the point’s Z-axis position. The further apart the photos, the more accurate the points will be.
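
The baseline effect described above can be sketched with the standard stereo depth-precision relation, sigma_Z ≈ Z² · sigma_d / (f · B): shrinking the baseline B directly inflates the expected Z error. The focal length in pixels and the matching error below are assumed, illustrative values, not Pix4D internals:

```python
# Stereo depth precision: sigma_Z ~ Z^2 * sigma_d / (f * B)
# Z = camera-to-ground distance, f = focal length in pixels,
# B = baseline between photos, sigma_d = matching error in pixels.
# f and sigma_d below are illustrative assumptions.
def depth_precision_m(z_m, baseline_m, focal_px=3650.0, match_err_px=0.5):
    return z_m ** 2 * match_err_px / (focal_px * baseline_m)

z = 100.0  # flying height in metres
for b in (2.0, 10.0, 25.0):
    print(f"baseline {b:>4.0f} m -> ~{depth_precision_m(z, b) * 100:.1f} cm Z error")
```

With these numbers, a 2 m baseline gives roughly five times the Z error of a 10 m baseline, which matches the intuition that near-coincident photos make the vertical axis hard to pin down.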

If you are not doing mapping but doing large cityscape models, I still can’t see how this would need 30,000 images.

Perhaps advise what you are trying to achieve, so people can advise you better.


@John Campen - thanks John for the very detailed info!

I am trying to do a large 3D cityscape model. Here’s the current Sketchfab link: https://skfb.ly/6ySHW

Note the pixelation on the buildings and smokestacks.

The only purpose for what I’m doing is generating a 3D model for display on Sketchfab, similar to the model you shared, only including buildings that must be displayed with a decent resolution. No mapping, etc.

For image acquisition I was flying a DJI Phantom 4 Pro at 210ft altitude with 70-80% overlap and 65° camera angle.

When I extract a sample of images for a smaller area the model looks fine, leading me to believe the problem lies with Pix4D processing limitations. Here’s an example of a smaller area, using the same images from the large model above: https://skfb.ly/6zDMP

It could also be an issue with my processing rig, but I tried to build that using recommendations from the Pix4D moderators and community. Computer specs are here:

  • CPU: Intel Core i7-6950X (10 cores)(Overclocked to 4.1GHz)
  • GPU: MSI GeForce GTX 1080 Ti Founder’s Edition
  • Motherboard: Gigabyte GA-X99 Ultra Gaming
  • RAM: 128 GB (8x16GB) G.Skill DDR4 Trident Z 3600 MHz PC4-28800 CL17 (17-18-18-38)
  • SSD: Samsung 850 EVO 2TB 2.5-Inch SATA III
  • HDD: HGST Deskstar NAS 4TB 7200rpm SATA 6Gb/s, 64MB cache, 3.5"
  • Liquid Cooling: Corsair Hydro H60 Liquid CPU Cooler
  • Case: Thermaltake View 27 Gull-Wing Window ATX Mid-Tower Chassis
  • Fans: Thermaltake Riing 12 RGB 120mm x3
  • Power Supply: 750W Thermaltake SMART Series Power Supply 80+ Bronze Certified, Active PFC, SLI

@John, you don’t need spacing for high accuracy, at least in a 3D model. I run 95% overlap and get 3rd party validated accuracy below 1mm on a cell tower. Maybe mapping is different but I used the same technique on a golf green and got near 2mm accuracy.

@Brad, your power supply is too small and should be 1000W for that system…just to give some cushion. You should be able to run a 200 gigapixel project on that machine, which is 10,000 pictures. I wouldn’t suggest a single project any bigger.

@Adam, we supply surveys for geological exploration and quarry surveys, so 1-2 mm is overkill for our purposes.

I have seen this same discussion on the forums over the years; we had assumed that increasing the overlap would give much better resolution. From what I recall, the general consensus was that 85% was about the limit before it started to degrade the datasets rather than improve them. Although I can’t recall now whether most were doing models or mapping.

The question I have, given your application and experience: does the difference between say 85% and 95% really make a difference? The reason I ask: I had tried 90% overlap and it was worse than going back to 75% for the same dataset. There was so much noise and so many scattered points all over the dataset. Also, how many shots would your type of flight plan generate for 300 acres?

@Brad, even at that height and overlap, I can’t see how you are generating 30,000 shots.

See the flight plan below for my VTOL plane to cover 300 acres using the same overlap and altitude; this flight plan generates 3035 shots. If you do a cross-grid for this size area then it’s 6070 shots, still nowhere near 30,000.
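
A rough image-count estimate from area, footprint, and overlap lands in the same ballpark as that 3035-shot plan. The footprint numbers below assume a Phantom 4 Pro flying at about 62 m (210 ft) and are approximations, not values from any planner:

```python
# Approximate photo count for a grid flight: total area divided by the
# new ground covered per photo (footprint shrunk by front/side overlap).
def image_count(area_m2, footprint_along_m, footprint_across_m,
                front_overlap, side_overlap):
    step_along = footprint_along_m * (1 - front_overlap)
    step_across = footprint_across_m * (1 - side_overlap)
    return int(area_m2 / (step_along * step_across))

area = 300 * 4046.86  # 300 acres in m^2
# Footprint ~62 m x 93 m: assumed Phantom 4 Pro at ~62 m (210 ft) altitude.
print(image_count(area, 62, 93, 0.80, 0.70))  # roughly 3500
```

Pushing both overlaps toward 95% in the same formula is what drives the count toward tens of thousands of shots.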

I think you need to look at the capture plan you are using; there is definitely something wrong with taking 30,000 shots for a site this size. Also, if it is just to upload to Sketchfab, that seems like a lot of work given the file size limitations Sketchfab allows.


@John, that is the difference between modeling and mapping. 85% is a failure for high-accuracy models and even 90% is iffy. My point is not to say your mapping is wrong, but simply that other applications like 3D modeling work to different standards. The verticality of city buildings makes the situation more like a 3D model than a 3D map, in my opinion.

Fair comment, Adam, and thanks for your insight. How about side overlap, would you also use 95%? I see that if I set both forward and side overlap that high in my VTOL planner, I get around 26,000 shots for 300 acres scanned at 62 metres (210 ft). If that’s the case, I agree with your earlier comment about breaking the project up into segments to limit each sub-project to no more than around 10k shots. Even that I think is a lot.

I assume for transmission towers you use an orbit at a few levels, but for a cityscape like Brad is trying to scan I assume a grid flight is best, with the camera set at an angle of around 70-80°. The problem with the Pix4Dcapture app is that it gives no info about how many shots it will capture for a given flight plan. I also noticed its maximum overlap is set to 90%. Even for my DJI P4Ps I don’t use the capture app, and if you’re using 95% overlap I assume you’re not either? In view of your comments, it is interesting that Pix4D assumes 90% should be the maximum.