Moore's Law Stutters As Intel Switches From 2-Step To 3-Step Chip Cycle

Intel has announced that it's moving away from its current "tick-tock" chip production cycle and instead shifting to a three-step development process that will "lengthen the amount of time [available to] utilise... process technologies."

Image by Sh4rp_i

For years now, Intel has run its chip business on a 'tick-tock' basis: First it develops a new manufacturing technique in one product cycle (tick!), then it upgrades its microprocessors in the next (tock!).

But recently it's been struggling to keep pace. Intel's last process advance was the move to 14 nanometre transistors with its Broadwell processors, and under the tick-tock cycle we'd have expected a further shrink in 2016. Instead, last year Intel was forced to announce that its 2016 chip line-up, called Kaby Lake, would continue to use the 14 nanometre process, and that the next shrink would arrive in the second half of 2017, when it would move to transistors measuring just 10 nanometres in its Cannonlake chips.

Now, in an annual report filing, Intel has officially announced that it's moving away from the tick-tock timing. Instead, it will run on a three-step development process that it refers to as "Process-Architecture-Optimization." From the filing:

As part of our R&D efforts, we plan to introduce a new Intel Core microarchitecture for desktops, notebooks (including Ultrabook devices and 2 in 1 systems), and Intel Xeon processors on a regular cadence. We expect to lengthen the amount of time we will utilise our 14nm and our next generation 10nm process technologies, further optimising our products and process technologies while meeting the yearly market cadence for product introductions.
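The change can be sketched in a few lines. The 14nm step names below (Broadwell, Skylake, Kaby Lake) reflect Intel's publicly known line-up; the one-year step length and the `node_lifetime` helper are illustrative assumptions, not anything from the filing:

```python
# Sketch of Intel's old two-step cadence vs the new three-step one.
# Product examples are the publicly known 14nm cycle; the one-year
# step length is an assumption for illustration.

tick_tock = [
    ("tick", "new process node"),           # e.g. Broadwell (14nm)
    ("tock", "new microarchitecture"),      # e.g. Skylake
]

process_architecture_optimization = [
    ("process", "new process node"),        # e.g. Broadwell (14nm)
    ("architecture", "new microarchitecture"),  # e.g. Skylake
    ("optimization", "refined product"),    # e.g. Kaby Lake
]

def node_lifetime(steps, years_per_step=1):
    """Years a single process node stays in service under a cadence."""
    return len(steps) * years_per_step

print(node_lifetime(tick_tock))                          # 2
print(node_lifetime(process_architecture_optimization))  # 3
```

Under the old scheme a node lasted roughly two product generations; the extra "optimization" step stretches each node's lifetime by a third while keeping yearly product launches.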

While it doesn't explicitly refer to timescales, the news suggests that Moore's Law — which states that the number of transistors on an integrated circuit doubles every two years — is stuttering at Intel. The size of the transistor, of course, dictates the number you can squeeze onto a chip. Indeed, last summer Intel's CEO Brian Krzanich mused that "the last two technology transitions have signalled that our cadence today is closer to 2.5 years than two."
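Krzanich's point can be put in rough numbers. Under a strict two-year doubling, transistor counts grow by a factor of 2^(t/2); stretch the cadence to 2.5 years and a decade's growth halves. A minimal sketch (the function names and the inverse-square density assumption are ours, purely for illustration):

```python
# Illustrative Moore's Law arithmetic -- not Intel data.

def growth_factor(years, doubling_period):
    """Multiplicative increase in transistor count after `years`,
    given one doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

def density_gain(old_nm, new_nm):
    """Approximate density gain from a node shrink, assuming transistor
    density scales with the inverse square of the feature size."""
    return (old_nm / new_nm) ** 2

print(growth_factor(10, 2.0))          # 32.0: a decade at a 2-year cadence
print(growth_factor(10, 2.5))          # 16.0: the same decade at 2.5 years
print(round(density_gain(14, 10), 2))  # 1.96: a 14nm -> 10nm shrink roughly doubles density
```

That last figure is why each shrink matters so much: moving from 14nm to 10nm alone accounts for close to one full Moore's Law doubling.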

All of this is, of course, the result of the original "tick" becoming increasingly difficult to bring about: We're fast approaching the limit of what can be done with conventional silicon. IBM has announced that it can create 7-nanometre transistors, but that relies on a new technique using silicon-germanium rather than pure silicon in the manufacturing process, and at any rate it's a long way off being fully commercialised.

The truth is that we may just have to start waiting a little longer for faster silicon, for now at least.

[Intel via The Motley Fool via Anandtech]



    Are we also getting to the point where we even need faster CPUs? Years past, a CPU upgrade made a massive difference to system performance; the CPU was commonly the bottleneck. Forums used to be full of warnings like "don't get this graphics card because your CPU will bottleneck it". Haven't heard things like that for a while. On my last PC build I think the change from SATA2 to SATA3 SSD made the most noticeable difference.

      Yeah, this. The biggest & best improvement I ever made in recent years was an SSD.

      I'm still running Sandy Bridge (socket 1155, 8GB of RAM, from 2011/2012) with a GTX 760, which runs fine and nothing bottlenecks. Windows boots to usable within 30 seconds and everything still runs like a dream. That said, I am thinking about making the jump to Skylake and whatever new GPU architecture comes out this year.

      The answer is probably not. The clock speed per se is becoming less and less important.

      Mainly it's because of the increased functionality of processors offloading tasks from the "main" core. For example, mobile versions are now SoCs with different parts performing different roles. And there are a whole load of other technologies, such as multi-core threading and virtualisation, that influence performance too.

      But, as you mentioned, there are other optimisations that can increase the perceived "speed" of a system far more noticeably than CPU. Especially for gaming where the GPU and RAM matter more.

        You have really lost me here. Main core?

        "with different parts performing different roles"?
        These other roles you are talking about are handled by the system bus and associated controllers. The bus systems don't use CPU time; changes in bus design and the removal of separate northbridges have sped up bus speeds significantly, as has the use of more unified transport systems and the relocation of memory controllers. Again, this has nothing to do with CPU speed, but overall performance is increased because your devices are talking to each other faster and not being throttled at controllers or interface interpreters.

        Also, these controllers, like in newer Intel chips where the FSB is removed and the northbridge is on-chip, are something completely different. That still has nothing to do with CPU speed; it's just moving a controller onto the same chip, reducing interconnects etc. and gaining overall system performance.

        "The clock speed per se is becoming less and less important"? It's almost irrelevant; it's all about the architecture these days. Interesting you brought it up, I'm not sure why.

        SoCs? No, they are not; it's like "hoverboards". I couldn't think of a better way of describing it, so..

          Clock speeds are fairly irrelevant anyway; it's all about the operations per clock in each successive generation.

    I imagine Moore's Law will start to fall behind for a while now, but it will more than catch up when we move to quantum chips, whenever that may be.
