I just spent an hour writing a reply to another thread that became locked and lost my long-winded reply.  It was essentially very similar to Antigoni’s reply.  However, CPUs are usually benchmarked for gaming, tested by gamers, and voted on by gamers, resulting in daily benchmark variations.  Intel CPUs are built for abusive people such as myself who water block and overclock.  However, even without overclocking, CPUs generate heat, and heat is the enemy.  Intel CPUs are purposely overbuilt to handle the heat while AMDs… well… they’re built just well enough to survive right up to the point of thermal suicide.
The AMD Ryzen 1800X can only handle 95W of power before it begins to degrade due to thermal instability.
The Intel Xeon E5-2673 v3 can handle 110W of power with 12 cores (as opposed to Ryzen’s 8) at a 4 GHz overclock.
Here’s where quality comes into play.  Although the Ryzen can Turbo Boost to 4 GHz per the specifications, they fail to tell you that a maximum of only 2 cores are boosted at any given time, in one-second intervals, while Intel can boost all 12 at full throttle in intervals of up to 20 seconds.  Add an inexpensive Corsair self-contained liquid cooler and you can run all 12 at 4 GHz full time.
The Ryzen 1800X benchmarks at #54 with an advertised clock of 3.6 GHz and the Xeon benchmarks at #32 with an advertised clock of 2.4 GHz, and both are priced similarly.
But why would a 3.6 GHz processor benchmark 50% lower than a 2.4 GHz one?  The 3.6 GHz is never equalized across all the cores. You’re only going to get two cores running at 3.6 GHz with the other six running at 50%, or 1.8 GHz.  With the Xeon stock out of the box, all 12 cores run at 2.4 GHz and boost as necessary, dropping to two cores only at idle.  Add the Corsair cooler and you’ll get 4 GHz on demand with all 12 cores.
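To put rough numbers on that, here’s a back-of-the-envelope sketch.  It assumes benchmark score scales with total core-GHz, which is a big simplification (IPC, memory, and cache all matter), using the per-core clocks described above:

```python
# Aggregate throughput in core-GHz, assuming score scales with total core-GHz.
# Per-core clocks follow the behavior described above; treat as a rough
# illustration only, since IPC, memory, and cache also matter.

ryzen_1800x = 2 * 3.6 + 6 * 1.8  # 2 boosted cores + 6 at half clock = 18.0
xeon_e5_2673_v3 = 12 * 2.4       # all 12 cores at stock base clock = 28.8

print(f"Ryzen 1800X:     {ryzen_1800x:.1f} core-GHz")
print(f"Xeon E5-2673 v3: {xeon_e5_2673_v3:.1f} core-GHz")
print(f"Xeon advantage:  {xeon_e5_2673_v3 / ryzen_1800x - 1:.0%}")  # ~60%
```

Under that crude model the Xeon comes out roughly 60% ahead, which is in the same ballpark as the benchmark gap.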
I’ve been building my own computers since 1994, when aftermarket parts became readily available.  I have always built mine optimized for gaming.  I purchased one AMD processor and cooked it before I even put the case together.
Here are four computers I own that I tested PIX4D on, using a 6 1/2 acre earthworks project with 880 total photos divided among 6 different angles:
2011 Model Lenovo Twist i7-3537, 8GB, 512GB Samsung 850:  Removing all of PIX4D’s enhancement capabilities for the lowest possible quality result required 34 hours to complete.
2017 Model Microsoft Surface Pro i7-6600, 16GB, 1TB SSD:  Same processing scenario as the Lenovo required 26 hours to complete.
2017 Model MSI GT83VR i7-6820 w/ dual SLI GTX 1070 16GB GDDR5, 64GB DDR4-2400, 3x Samsung 960 EVO M.2, 1x Samsung 850 Pro 1TB:  Required less than two hours to complete the process described above.
Brought everything up to “default” as indicated by PIX4D and achieved full processing in 6 hours.
Bumped the photo size up to “Original” with 100,000 Keypoints, set Point Cloud Image Scale to “Original Image Size”, Optimal Point Density, Medium Mesh Resolution, 9x9 Point Cloud Densification, checked all boxes in DSM/Ortho with Maximum Pixel downsampling, and achieved full processing in less than 18 hours.
I have not attempted a maximum output quality run, as I use this machine in the field.  I’ll perform one this weekend and report back.
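For a quick sense of scale across the three machines on that same lowest-quality job (calling the MSI’s “less than two hours” a round 2.0):

```python
# Relative speedups on the same lowest-quality job, from the times above.
# The MSI finished in "less than two hours"; 2.0 is used as a round figure.
times_hours = {
    "Lenovo Twist (i7-3537)": 34.0,
    "Surface Pro (i7-6600)": 26.0,
    "MSI GT83VR (i7-6820)": 2.0,
}

baseline = times_hours["Lenovo Twist (i7-3537)"]
for machine, hours in times_hours.items():
    print(f"{machine}: {hours:.0f} h ({baseline / hours:.1f}x vs. the Lenovo)")
```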
Now, for the computer I built to heat half my home last year…
i7-6900K water blocked and clocked to 4.9 GHz, 128GB DDR4-2400, dual SLI GTX 1080 Ti (22GB GDDR5X combined) with factory water blocks, Corsair AX1500i, and more SSD capacity than the NSA.
1: Original Image Size (Double Image Size actually distorts high-resolution photos)
  1,000,000 Keypoints (overkill; barely any difference from 100,000)
2: Point Cloud Image Scale: Original
  3D Mesh: Octree depth set to 12, custom resolution set to 131072x131072, qualitative set to sensitive, densification set to 9x9, and sample density divider set to 5
3: GSD set to 1cm per pixel, noise filtering: On, Surface smoothing: Sharp, GeoTIFF Raster: Triangulation
I did my best to max out every setting.
Before I give the result: Windows defaults the priority of the program to Low.  At this priority, full processing took a little over 30 hours.
WARNING:  If you want to notch up the program priority, perform the following steps (a scripted way to set the priority is sketched after the list):
Above Normal:  Click to process, then remove the batteries from your keyboard and mouse, turn off your monitor and only turn it on to check the progress bar.  27 hours.
High:  Turn off every non-essential Windows process, give the resources an additional 4GB of RAM, and take the wife, kids, dog, cat, mouse, and keyboard on an overnight trip.  22 hours.
Realtime:  Turn off every Windows process until it crashes Windows, then back off one process while keeping all your Nvidia processes.  Give the resources the same 4GB RAM, and turn off all your anti-virus, screen saver, energy saver, EVERYTHING.  Give the wife money to take the kids, dog, cat, mouse, and keyboard on a day trip.  Clear a few shelves out of the refrigerator, crank it up to 9, and insert the computer tower, leaving only the power and display cables to extend outside.  Only plug in the monitor to turn it on to check progress.  Place a cooler full of ice and 12 bottles of beer to the right of your easy chair and a fire extinguisher to the left.  Watch Netflix on the iPad, drink beer, and enjoy the silence.  17 hours.
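If you’d rather script the priority bump than click through Task Manager every run, something like this works.  A minimal sketch, assuming the psutil package is installed and that the PIX4D process is named pix4dmapper.exe (a guess on my part; check Task Manager for the actual name on your install):

```python
# Bump a running PIX4D process's priority class on Windows.
# Requires: pip install psutil.  Realtime needs an elevated (admin) prompt.
import psutil

TARGET = "pix4dmapper.exe"             # hypothetical name; verify in Task Manager
PRIORITY = psutil.HIGH_PRIORITY_CLASS  # or ABOVE_NORMAL_PRIORITY_CLASS /
                                       # REALTIME_PRIORITY_CLASS (use with care)

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] and proc.info["name"].lower() == TARGET:
        proc.nice(PRIORITY)            # on Windows, nice() sets the priority class
        print(f"PID {proc.pid} set to priority class {PRIORITY}")
```

Same caveat as above: anything past High makes the machine fragile, so start the job, run the script, and walk away.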
Not being married, with no young children or animals, I have a refrigerator dedicated to my home computer.  It’s actually a wine cooler with a glass door; don’t judge. However, if the computer is disturbed at all with the priority set anywhere above Normal, you will crash either the program or the operating system.  Even I switch off my mouse and keyboard right after I start processing.
My next home build will include two Nvidia Quadro P6000 cards, which will bring four times the CUDA cores of the GTX series to bear through the CUDA SDK.
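A quick sanity check on that 4x figure using published Pascal CUDA core counts; it holds against a single GTX 1070, though against a dual 1080 Ti rig the margin is much slimmer:

```python
# Published CUDA core counts for the Pascal-generation cards discussed here.
cores = {"GTX 1070": 1920, "GTX 1080 Ti": 3584, "Quadro P6000": 3840}

dual_p6000 = 2 * cores["Quadro P6000"]          # 7,680 cores
print(dual_p6000 / cores["GTX 1070"])           # 4.0x a single GTX 1070
print(dual_p6000 / (2 * cores["GTX 1080 Ti"]))  # ~1.07x a dual 1080 Ti rig
```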