When Do TFlops Matter?

With the announcement of the PlayStation 5 and Xbox Series X specs, the console wars are not just heating up, they are boiling over. While Microsoft and Sony are taking similar paths with similar technology, there is also a lot of comparison to previous consoles.

With the next-generation consoles, Microsoft is taking the approach of stable output at fixed, high clock speeds to provide a consistent experience for developers. In theory, this should stabilize performance and make life easier for developers, but there are downsides to this model.


For example, in highly dynamic environments where explosions are happening, there is no “boost” to help keep frame rates stable. Effectively, developers have to optimize their entire game to a defined performance benchmark, and if they attempt to render a scene that taxes the system, they have to scale down the scene rather than push the clock speed higher to maintain stability.
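
To make that concrete: on fixed-clock hardware, the only lever left is the workload itself, and "scaling down the scene" usually means something like dynamic resolution scaling. Here is a minimal sketch of the idea; the function, thresholds, and step sizes are illustrative, not taken from any console SDK:

```python
# Illustrative dynamic-resolution heuristic: trade pixels for stable frame times.
TARGET_FRAME_MS = 16.7  # frame budget for 60 fps

def adjust_resolution_scale(scale: float, last_frame_ms: float) -> float:
    if last_frame_ms > TARGET_FRAME_MS:
        # Over budget (say, an explosion-heavy scene): drop render resolution.
        return max(0.5, scale - 0.05)
    if last_frame_ms < TARGET_FRAME_MS * 0.9:
        # Comfortable headroom: creep back toward native resolution.
        return min(1.0, scale + 0.01)
    return scale
```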

This really isn’t that big of an issue; it’s a minor tradeoff for having higher sustained performance rather than boost performance. The PlayStation 5 can push 10.3 TFlops of output but will typically operate at around 9.2 TFlops; the Xbox Series X will operate consistently at 12 TFlops with no deviation in performance.

Put another way, PlayStation developers should be targeting games with the performance characteristics of a 9.2-TFlop console, and if a scene needs some extra muscle to keep frames stable, the console can briefly kick up the horsepower. Xbox developers will be targeting an environment with 12 TFlops of performance, and in the event that a scene drags frame rates down, they will need to either optimize the loading of assets or scale back other features, as there is no boost performance available.
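
For reference, the headline TFlops figure is simple arithmetic: shader count times clock speed times two, since each shader can retire a fused multiply-add (two floating-point operations) per cycle. A quick sketch using the publicly announced specs reproduces both numbers:

```python
# Theoretical FP32 throughput in TFlops: shaders * GHz * 2 ops/cycle / 1000.
def tflops(compute_units: int, clock_ghz: float, shaders_per_cu: int = 64) -> float:
    return compute_units * shaders_per_cu * clock_ghz * 2 / 1000

print(tflops(52, 1.825))  # Xbox Series X: ~12.15 TFlops at a fixed clock
print(tflops(36, 2.23))   # PS5: ~10.28 TFlops at its maximum (boost) clock
```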

“Boost” is a common feature in the CPU world: Intel’s Turbo Boost, for example, temporarily raises a CPU’s clock speed for increased performance, and it is a proven tactic for edging out a little more compute when needed. But it isn’t sustainable over lengthy periods of time; otherwise, that would be the default clock speed.
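
A rough mental model of boost (a deliberate simplification, not how any specific chip schedules itself) is that the sustained clock settles wherever power draw meets the power and thermal budget, clamped between the base and boost frequencies:

```python
# Toy power-envelope model; the base/boost defaults are illustrative only.
def sustained_clock_ghz(power_budget_w: float, watts_per_ghz: float,
                        base_ghz: float = 1.8, boost_ghz: float = 2.23) -> float:
    affordable = power_budget_w / watts_per_ghz  # clock the budget can pay for
    return min(boost_ghz, max(base_ghz, affordable))
```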

So what does all this mean? It’s important to understand the basics of each console’s approach, but at the end of the day, for the user, none of this is all that relevant. It’s great for arguing that one console is better than the other, but there are many variables in what makes a console great.

Another recent talking point, the performance of the unannounced Lockhart console that targets lower performance at a lower price, shows why TFlops comparisons don’t always make much sense. The console is expected to come in around the 4-5 TFlop range, well below the Series X and the PS5, and even below the Xbox One X.

If the Xbox One X has 6 TFlops of power and the Lockhart (possibly known as the Series S) only has 4, doesn’t that make its performance worse than an existing device? No, because this is not an apples-to-apples comparison.

Why? For starters, the Lockhart console will have at its disposal a bunch of new features that significantly optimize its output when compared to the One X’s aging, previous-generation architecture. When Microsoft does announce the hardware, and especially if it supports all the features of DirectX 12 Ultimate (DX12U), the console will benefit from hardware and software improvements that should let it deliver stable framerates the One X could struggle to output.

Microsoft even points out that by enabling Sampler Feedback Streaming (SFS), “because it avoids the wastage of loading into memory the portions of textures that are never needed, it is an effective 2x or 3x (or higher) multiplier on both amount of physical memory and SSD performance.” This is one aspect of one feature that will be included in next-gen Xbox consoles, and it alone enables a 2-3x effective gain from storage; the Lockhart console will be significantly more optimized for performance than the One X.
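
To see where a “2x or 3x” multiplier can come from, here is a back-of-the-envelope sketch with made-up numbers (mine, not Microsoft’s): if sampler feedback shows that only about a third of a texture’s tiles are ever sampled, streaming just those tiles makes the same physical memory and SSD bandwidth go roughly three times as far.

```python
# Back-of-the-envelope effective multiplier from streaming only sampled tiles.
def effective_multiplier(tiles_total: int, tiles_sampled: int) -> float:
    return tiles_total / tiles_sampled

# If feedback reports ~1/3 of a texture's tiles were actually touched:
print(effective_multiplier(tiles_total=1024, tiles_sampled=341))  # ~3.0x
```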

What I am saying is that you can’t compare the TFlop output of next-gen consoles to existing hardware, as it’s not the complete picture. Between next-gen consoles, it’s a fair comparison, but when Microsoft does finally announce Lockhart, don’t lock in on the raw performance figure, as it’s not the complete story.


Conversation (10 comments)

  • Jonas Barkå

    26 March, 2020 - 11:07 am

    Do we know that it will typically operate in a state that is around 9.2? The presentation only stated something like "close to 10.3 most of the time."

    Do you have additional info?

    • anthont

      27 March, 2020 - 4:04 pm

      In reply to Havoc: Agreed.

      If you watch Cerny's talk again, he stated that power draw is fixed in the PS5, to prevent having to throttle based on unknown thermals. Fixed power draw equates to fixed thermals, meaning the boost clock can stay at peak in most situations, unless the operation doesn't require it.

      A lower CU count with higher GPU clocks may indeed lead to a performance deficit, but it may also eradicate overhead in CU utilisation; we'll just have to see how this plays out in real situations. But with I/O coprocessors, the Tempest SPU, and a high-bandwidth SSD as RAM? This will be a very interesting time.

  • proesterchen

    26 March, 2020 - 11:17 am

    The TFs tell us that Lockhart would be targeting a just-above-1080p to maybe 1440p experience. At that level, it's basically only viable for true next-gen games that rely on the much quicker Zen 2 CPU cores and storage subsystem for their design and cannot be back-ported to the awful Jaguar-based SOCs of the current gen.

    I'm not sure saving roughly 100mm² on the first implementation of the SOCs (S vs X) by cutting down graphics hardware by ~2/3rds is a useful trade-off, considering the extra resources required to carry a second, significantly slower model forward through the entire console generation. Bad customer experience and higher load on devs vs. being able to market a lower price for an inferior product.

    If it were my decision, I wouldn't bring Lockhart to market.

    • rm

      26 March, 2020 - 11:45 am

      In reply to proesterchen: Except, as the article states, there are other improvements that make 1 TF on the new consoles push more graphics than 1 TF on the last generation. So, 1 TF on the Xbox One X gives you less graphics processing than 1 TF on the Xbox Series X.

      • proesterchen

        26 March, 2020 - 11:50 am

        In reply to RM: I was not comparing Lockhart to the One X, but rather to the Series X, which is around 2.5 to 3 times as powerful in graphics terms. (Same micro-architecture, so a comparison based on TFs is appropriate.)

    • evox81

      Premium Member
      26 March, 2020 - 3:15 pm

      In reply to proesterchen: Considering game developers are (often, but not always) already doing the work of optimizing for hardware with differing levels of performance (the entire Xbox One line on the console side, and various graphics cards on the PC side), adding a "middle" option doesn't really make any more work for them. They're already doing this (quite well) with the One, One S, and One X, and PC devs (which Xbox devs are basically turning into) have been doing it for decades, so this isn't really making their jobs any more difficult.

      Lockhart, and having options in general, is good for consumers. I'll take it.

  • Sykeward

    26 March, 2020 - 12:44 pm

    I agree with you that the raw figures of the One X vs. Lockhart don't necessarily show actual performance, but I have to call you out just a little, sir. You talk here about Microsoft-specific features that make Lockhart more efficient and will help it substantially bridge the performance gap. BUT! You've also claimed that the PS5's 2x-faster storage vs. the Series X probably won't make much difference because it's unlikely devs will take advantage of vendor-specific features in otherwise-similar hardware (as you did in a recent podcast). I know that you and Paul are pretty firmly in the Xbox camp, but I don't think you can have this both ways.

    There's precedent for all this, though. If we look at the Xbox One and original PS4 specs, there was a similar performance gulf between the two basically-the-same hardware platforms:

    Xbox One: 1.23 TFLOPS GPU w/ 768 cores
    PS4: 1.84 TFLOPS GPU w/ 1152 cores

    The PS4 had way faster RAM, too! How much difference did it make in real-world performance? Almost zero.

  • madthinus

    Premium Member
    27 March, 2020 - 8:23 am

    The flip side of all of this argument is the overhead of the OS and GPU driver. If the Xbox has more overhead than the Sony, the TFlops numbers are meaningless. How much you can tap out of the hardware depends greatly on the OS and driver, and those get better with time as they make progress at addressing bottlenecks.

  • remc86007

    27 March, 2020 - 4:12 pm

    I think Brad is wrong about how this will work. The PS5 is not going to operate at some lower clock rate and then "boost" up to higher clocks temporarily when a high-demand event occurs, like an explosion. Instead, the PS5 will operate exactly how laptop APUs and most modern PC GPUs operate: within a TDP envelope. The CPU and GPU on the PS5 will always be operating as fast as they can without going over the max TDP.

    Brad makes it seem like this is easier for a developer to deal with than stable clocks, but I assure you it is not. It is actually much more difficult, because instead of just keeping the CPU and GPU frame times under the target based on each part's fixed compute capabilities, you have to anticipate what the CPU will be doing while the GPU is trying to render the frame, and determine whether the CPU load is one that will draw extra power (like extensive AVX instructions), causing the GPU power to necessarily decrease to accommodate the CPU's need for power headroom. It really is a nightmare to deal with and will likely result in game developers aiming for lower average frame times just to simplify development. The only situation in which the PS5 design is useful is maximizing CPU or GPU capabilities when the other part is not under stress, within a lower thermal and power envelope; unfortunately, most times the GPU is super stressed, the CPU is too, negating this advantage.

  • Stooks

    27 March, 2020 - 5:08 pm

    The TFlops that will matter for at least a year after the launch of these new consoles, probably more, will be those of the Xbox One S and PS4.

    Why? Because they want to sell games, and lots of them. So selling to the 150+ million Xbox One/PS4 users will take priority over the million or fewer new-console owners this holiday season.

    Add in the economic impact of the events of today and I bet new console sales are going to be lower than expected.
