With the announcement of the PlayStation 5 and Xbox Series X specs, the console wars are not just heating up but tipping over into extreme levels of insanity. While Microsoft and Sony are taking similar paths with similar technology, a lot of comparisons are also being drawn to previous-generation consoles.
With its next-generation console, Microsoft is taking the approach of stable output at a fixed, high clock speed to provide a consistent experience for developers. In theory, this should stabilize performance and make life easier for developers, but there are downsides to this model.
For example, in highly dynamic environments where explosions are happening, there is no "boost" to help keep the framerate stable. Effectively, developers have to optimize their entire game to a defined performance target, and if they attempt to render a scene that taxes the system, they have to scale the scene down rather than push the clock speed higher to maintain stability.
This really isn't that big of an issue; it's a minor tradeoff for having higher sustained performance rather than boost performance. The PlayStation 5 can push 10.3 TFLOPS of output but will typically operate at around 9.2 TFLOPS; the Xbox Series X will operate consistently at 12 TFLOPS with no deviation in performance.
Put another way, PlayStation developers should be targeting games with the performance characteristics of a 9.2 TFLOPS console, and if a scene needs some extra muscle to keep frames stable, the console can briefly kick up the horsepower. Xbox developers will be targeting an environment with 12 TFLOPS of performance, and in the event that a scene drags frames down, they will need to either optimize the loading of assets or scale back other features, as there is no boost performance available.
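For context, those headline numbers come straight from GPU math: FP32 TFLOPS is compute units × 64 shaders per CU × 2 operations per clock (a fused multiply-add) × clock speed. Here is a quick sketch using the publicly stated CU counts and clocks; note that the ~2.0 GHz sustained figure for the PS5 is an inference from the 9.2 TFLOPS number, not an official spec:

```python
# Where the headline TFLOPS figures come from:
# FP32 TFLOPS = CUs x 64 shaders/CU x 2 ops/clock (FMA) x clock (GHz) / 1000
def tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000

print(tflops(36, 2.23))   # PS5, 36 CUs at its 2.23 GHz peak clock   -> ~10.3
print(tflops(36, 2.0))    # PS5 at an assumed ~2.0 GHz sustained     -> ~9.2
print(tflops(52, 1.825))  # Series X, 52 CUs at a fixed 1.825 GHz    -> ~12.1
```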
"Boost" is a common feature in the CPU world: Intel has long used Turbo Boost, which temporarily raises a CPU's clock speed for increased performance, and it's a proven tactic for edging out a little more compute when needed. But it's not sustainable for lengthy periods of time; otherwise, that would be the default clock speed.
So what does all this mean? It's important to understand the basics of each console's approach, but at the end of the day, for the user, none of this is all that relevant. It's great fodder for arguing that one console is better than the other, but there are many variables in what makes a console great.
One of the other recent talking points is the performance of the unannounced Lockhart console, which targets lower performance at a lower price, and it's a good example of why TFLOPS comparisons don't always make much sense. The console is expected to come in around the 4-5 TFLOPS range, well below the Series X and the PS5, and even below the Xbox One X.
If the Xbox One X has 6 TFLOPS of power and Lockhart (possibly to be known as the Series S) only has 4, doesn't that make its performance worse than an existing device's? No, because this is not an apples-to-apples comparison.
Why? For starters, the Lockhart console will have at its disposal a raft of new features that significantly optimize its output compared to the One X's nearly decade-old GCN-based architecture. When Microsoft does announce the hardware, and especially if it supports all the features of DirectX 12 Ultimate (DX12U), the console will benefit from hardware and software improvements that should let it deliver stable framerates that the One X could struggle to match.
Microsoft even points out that with Sampler Feedback Streaming (SFS), "because it avoids the wastage of loading into memory the portions of textures that are never needed, it is an effective 2x or 3x (or higher) multiplier on both amount of physical memory and SSD performance." That is one aspect of one feature coming to the next-gen Xbox consoles, and it alone enables a 2-3x effective gain from storage; the Lockhart console will be significantly better optimized for performance than the One X.
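To make the SFS idea concrete, here is a minimal, purely illustrative sketch (this is not the DirectX 12 Ultimate API): the GPU records which texture tiles were actually sampled in a frame, and only the tiles that are missing from memory get streamed in from the SSD.

```python
# Illustrative only -- not the DirectX 12 Ultimate API. Sampler feedback
# records which tiles of a texture the GPU actually sampled; SFS then
# streams just those tiles instead of entire mip levels.
def tiles_to_stream(sampled_tiles: set, resident_tiles: set) -> set:
    """Return tiles touched last frame that aren't in GPU memory yet."""
    return sampled_tiles - resident_tiles

# Hypothetical frame: the sampler touched 3 tiles of a large texture,
# and 2 of them are already resident in memory.
sampled = {(0, 1, 2), (0, 1, 3), (0, 2, 3)}   # (mip, x, y) tile coordinates
resident = {(0, 1, 2), (0, 1, 3)}
print(tiles_to_stream(sampled, resident))      # only {(0, 2, 3)} hits the SSD
```

If a frame only ever samples a fraction of a texture's tiles, the rest never has to leave storage at all, which is where the claimed 2-3x effective multiplier on memory and I/O comes from.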
What I am saying is that you can't compare the TFLOPS output of next-gen consoles to existing hardware, as it's not the complete picture. Between next-gen consoles the comparison is fair, but when Microsoft does finally announce Lockhart, don't lock in on the raw performance figure; it's not the complete story.
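As a rough back-of-the-envelope illustration of why raw TFLOPS mislead across architectures: AMD has claimed roughly 1.25x the performance per clock for RDNA versus GCN, so a naive "GCN-equivalent" estimate for a rumored 4 TFLOPS Lockhart looks like this (the Lockhart figure is unconfirmed and the multiplier is approximate):

```python
# Back-of-the-envelope only: TFLOPS compare cleanly within one
# architecture, not across them. AMD has claimed ~1.25x per-clock
# performance for RDNA over GCN; treat the figure as approximate.
RDNA_PER_FLOP_VS_GCN = 1.25

lockhart_tflops = 4.0   # rumored, unconfirmed
one_x_tflops = 6.0      # Xbox One X (GCN-based)

gcn_equivalent = lockhart_tflops * RDNA_PER_FLOP_VS_GCN
print(gcn_equivalent)   # ~5.0 "GCN-equivalent" TFLOPS, before counting
                        # DX12U features like SFS or the far faster CPU/SSD
```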
The TFLOPS tell us that Lockhart would be targeting a just-above-1080p to maybe 1440p experience. At that level, it's basically only viable for true next-gen games that rely on the much quicker Zen 2 CPU cores and storage subsystem for their design and cannot be back-ported to the awful Jaguar-based SoCs of the current gen.

I'm not sure saving roughly 100 mm² on the first implementation of the SoCs (S vs. X) by cutting the graphics hardware down by roughly two-thirds is a useful trade-off, considering the extra resources required to carry a second, significantly slower model through the entire console generation. A bad customer experience and a higher load on developers, versus being able to market a lower price for an inferior product.

If it were my decision, I wouldn't bring Lockhart to market.
In reply to RM: I was not comparing Lockhart to the One X, but rather to the Series X, which is around 2.5 to 3 times as powerful in graphics terms. (Same microarchitecture, so a comparison based on TFLOPS is appropriate.)
The TFLOPS that will matter for at least a year after the launch of these new consoles, probably longer, will be those of the Xbox One S and PS4.

Why? Because they want to sell games, and lots of them. Selling to the 150+ million Xbox One/PS4 users will take priority over the million or so new-console owners this holiday season.

Add in the economic impact of today's events, and I bet new console sales are going to be lower than expected.