Why Stagnation of CPU Performance is Not a Big Deal


    There are various reports about the difficulties of scaling current CPU technologies upward to deliver the major improvements in speed we have come to expect. In a nutshell, the ongoing trends of adding more cores and integrating various non-CPU system functions into the CPU will only get us so far, and they will fairly soon reach their own plateaus.

    At that point we’re looking at a number of alternatives, from nanotubes to graphene, none of which will be easy to implement any time soon as a true replacement for, or upgrade over, the existing paradigm. Given this situation, it seems quite possible that there will be a period of stagnation: Moore’s law will no longer apply, and gains in processing power will be negligible to non-existent for at least a few years.

    But is this really such a big deal? For some enthusiasts it seems to represent an almost nightmarish scenario: after decades of wonderful, continuous, even accelerating improvement, it would all just stop. That seems hard to imagine, or hard to accept.

    But there are a few good reasons why this actually isn’t such a big deal, and why it doesn’t necessarily signal a cessation of technological progress in general, or in information technology specifically.

    1. Most people are past the threshold

    Back in the nineties, getting a computer with the latest processor really was a big deal. You could really feel the difference! And even the latest was still kinda slow, so we kept looking forward to the next one. More gigahertz! More speed!

    But today things are quite a bit different. For the vast majority of people’s computing needs, current processors, and even ones a few years old, are more than enough, and upgrading to the latest provides fairly marginal benefits. In other words, for a lot of us, modern CPUs are well past the threshold of what we consider enough.

    This means that whatever improvements the latest and greatest chips bring increasingly serve the niches: the specialists, the data centers, those whose thresholds reach out into the skies. Those aren’t most people.

    2. Other components matter more

    Few smartphone users know which CPU their smartphone contains. I bet many would be surprised to learn that their smartphone actually has a dual-core or quad-core processor built in, let alone understand why that is supposed to be a big deal. But talk about battery life, and I bet there would be a lot more interest and excitement.

    The same goes for things like built-in storage, internet connection speeds, and so on, though these are far closer to the kind of threshold I’ve talked about above than battery life is.

    In other words, if we could achieve a revolution in battery technology that enabled smartphones and tablets to last an entire week instead of barely a day, this would be a far bigger deal to us than our smartphones doubling the processing power of their CPUs.

    Similarly, if every new PC came with an SSD instead of a traditional hard drive, and those SSDs had comparable capacities, this would in most cases offer a much more discernible benefit than the latest and greatest CPU. RAM is another example of a resource that probably matters more to most people than CPU power.

    In other words, there are certain components of our digital devices that have to catch up to the performance our CPUs offer before that potential can be truly maximised. As of now we have these awesome processors held back by the components around them. When we deliberately reduce the speed of our processors so they draw less battery power, and when our CPUs sit idle waiting for a slow hard drive to read data, it is clear that our priority should be those components, not CPUs, if we want to maximise overall system performance.
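    To make the bottleneck concrete, here is a minimal Python sketch (the file path and sizes are placeholders I picked for illustration) that times a CPU-bound loop against a bulk read from disk. On a machine with a traditional hard drive, the second timing is dominated by storage, not the processor, so a faster CPU would barely move it.

    ```python
    import time

    def cpu_bound(n=10_000_000):
        # Pure computation: limited only by CPU speed.
        total = 0
        for i in range(n):
            total += i * i
        return total

    def io_bound(path="large_file.bin", chunk=1 << 20):
        # Bulk read: the CPU mostly sits idle waiting on storage.
        # "large_file.bin" is a placeholder; point it at any big file.
        read = 0
        with open(path, "rb") as f:
            while block := f.read(chunk):
                read += len(block)
        return read

    for name, task in [("cpu-bound", cpu_bound), ("disk-bound", io_bound)]:
        start = time.perf_counter()
        task()
        print(f"{name}: {time.perf_counter() - start:.2f}s")
    ```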

    Until these other components catch up, it is perfectly fine for CPUs to stagnate for a while, before super-advanced new paradigms are developed.

    3. Maximising the utility of existing levels of processing power

    This ties into the last point, but extends to a far bigger picture. When we focus mainly on making the holy Central Processing Unit ever more powerful with every generation, we as observers, users, and enthusiasts might forget the huge benefits of maximising the utility of what we already have!

    Some examples include better, more efficient, and more elegant software; parallel computing for everyone; smart networked systems; and putting these powerful processors into more appliances, making them smart.
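    As a small taste of the ‘parallel computing for everyone’ idea, here is a minimal Python sketch (the workload is an arbitrary stand-in) showing the same fixed-speed cores delivering more throughput simply by being used together instead of one at a time:

    ```python
    from multiprocessing import Pool

    def heavy_task(x):
        # Arbitrary stand-in for any CPU-heavy per-item work.
        return sum(i * i for i in range(x))

    if __name__ == "__main__":
        inputs = [200_000] * 16

        # Serial: occupies a single core, however many the chip has.
        serial = [heavy_task(x) for x in inputs]

        # Parallel: spreads the identical work across all available cores,
        # extracting more utility from a CPU that is no faster per core.
        with Pool() as pool:
            parallel = pool.map(heavy_task, inputs)

        assert serial == parallel
    ```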

    In other words, even if the maximum processing power of a CPU doesn’t increase for an entire decade, its utility, the way it is used, could continue improving tremendously. This would give us the impression of continuing progress even though CPUs themselves are no longer getting faster. The ongoing process of networking our world and making everything smart would continue unabated.

    4. Cloud computing and web apps offload some of the local processing

    Let’s not forget the ongoing trends in cloud computing and web-based applications, which can do more and more of what was previously possible only with locally installed applications. Chromebooks are actually doing better than I expected, for instance, despite being basically just web browsers in a box. While the latest Chromebook Pixel is pretty beefy, Chromebooks typically don’t require all that much processing power just to run web apps.

    And web apps can take advantage of server-side processing for many of the heavy-duty tasks rather than local processing, meaning that the local computer no longer has to be all that powerful. Even high-end gaming without high-end local hardware becomes possible this way, albeit this has yet to gain much traction in the market (OnLive and Gaikai notwithstanding).
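    The offloading pattern itself is simple enough to sketch in a few lines of Python; the endpoint URL and payload below are hypothetical, but the shape is the same for any web app: the client ships a description of the work to a server and receives the finished result, so local CPU demands stay flat no matter how heavy the task.

    ```python
    import json
    import urllib.request

    # Hypothetical endpoint: the heavy lifting happens server-side, so the
    # local machine only serializes the request and displays the response.
    ENDPOINT = "https://example.com/api/render"

    def offload(task: dict) -> dict:
        payload = json.dumps(task).encode("utf-8")
        req = urllib.request.Request(
            ENDPOINT,
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    # The client-side cost is roughly the same whether the server renders
    # a thumbnail or a movie frame.
    result = offload({"scene": "demo", "quality": "high"})
    ```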

    Of course, offloading processing to servers still means that those servers could benefit from general increases in available computing power, but cloud computing providers are much better equipped to maximise efficiency than the typical consumer. After all, word is that the likes of Facebook, Amazon, and Google are among Intel’s biggest customers, ordering custom-designed chips for their data centers to better fit their requirements.

    For the consumer, however, being able to do more with a machine that isn’t all that much more powerful, simply by relying on a fast internet connection, makes the latest and greatest in CPU performance even less significant.

    5. Stagnation won’t last forever, and the future jump will make up for the dip

    Even if stagnation lasts for a decade or two, the sheer potential opened up by completely new paradigms such as quantum computing could largely make up for the missing progress of the stagnation period. After a while, the stagnation would seem like just a blip in the overall continuing trend of accelerating progress. I am sure that Ray Kurzweil, the popular transhumanist, singularitarian, and Google employee, would agree with this way of looking at things.

    Conclusion

    Stagnation in available processing power per chip may happen, but that isn’t necessarily a bad thing. It could in fact be positive for the overall evolution of information technology, as it may help shift our focus to areas that need much more improvement, areas which currently prevent us from taking fuller advantage of the processing power we already have. This dynamic also pressures CPU manufacturers themselves, as they struggle to innovate in ways that maximise the value of their CPUs even when they are unable to churn much more computing power out of them.

    The whole MtM (More than Moore) concept already demonstrates this. It is basically about optimising the performance of the entire system, including other components and the way they interact with the CPU, rather than focusing solely on the power of the CPU itself.

    Meanwhile, most consumers have less and less reason to care about the best in CPU performance, as other issues become more prominent and currently available performance passes the threshold of more than enough.

    Image courtesy of suphakit73 / FreeDigitalPhotos.net



    2 comments
    1. IL PC user

      28 July, 2018 at 12:03 pm

      Cheaper PCs will have shorter planned obsolescence, so people need to realize this and plan to buy hardware that will last that long. These days, if you want a notebook to last 5-plus years, you will have to buy better hardware and skip the cheap base-model stuff. Gamers especially will want to buy better-quality hardware to stave off obsolescence, which will come sooner rather than later if you do not spend enough on hardware.

    2. yag

      28 September, 2017 at 3:26 pm

      “the sheer potential opened up by completely new paradigms such as quantum computing(…)”
      Quantum computers are useless except for cryptanalysis. The author of this article knows nothing.
