The One Way Your Laptop Is Actually Slower Than A 30-Year-Old Apple IIe

Have you ever had that nagging sensation that your computer was slower than it used to be? Or that your brand new laptop seemed much more sluggish than an old tower PC you once had? Dan Luu, a computer engineer who previously worked at Google and Microsoft, had the same sensation, so he did what the rest of us would not: he tested a whole slew of computing devices, ranging from desktops built in 1977 to computers and tablets built this year.

And he learned that the nagging sensation was spot on -- over the last 30 years, computers have actually gotten slower in one particular way.

Image: William Warby/Flickr

Not computationally speaking, of course. Modern computers are capable of complex calculations that would be impossible for the earliest processors of the personal computing age. The Apple IIe, which ended up being the "fastest" desktop/laptop computer Luu tested, is capable of performing just 0.43 million instructions per second (MIPS) with its MOS 6502 processor.

The Intel Core i7-7700K, found in the most powerful computer Luu tested, is capable of over 27,000 MIPS.

But Luu wasn't testing how fast a computer processes complex data sets. He was interested in how the responsiveness of computers to human interaction has changed over the last three decades, and by that measure, the Apple IIe is significantly faster than any modern computer.

In a post on his website, Luu explains that he measured the time from pressing a key on a keyboard to when that keypress appeared on the display in a terminal window. He measured using two cameras, one capable of shooting 240 frames per second and another capable of shooting 1000 frames per second. While not as precise as if he'd had a machine doing the key input, his setup was still precise enough to give him a clear picture of just how slow newer computers have gotten.
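For the curious, the arithmetic behind that camera method boils down to counting frames. A rough sketch in Python (mine, not Luu's code; the frame counts below are made-up examples):

    # Count the video frames between the key going down and the character
    # appearing on screen, then convert frames to milliseconds. The camera's
    # frame rate limits how finely the latency can be resolved.
    def latency_ms(frames_elapsed, camera_fps):
        return frames_elapsed * 1000 / camera_fps

    print(latency_ms(12, 240))    # 12 frames at 240fps  -> 50.0ms
    print(latency_ms(90, 1000))   # 90 frames at 1000fps -> 90.0ms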

That's how he learned it takes 30 milliseconds for a 34-year-old Apple IIe to register an input on its accompanying display, and 200 milliseconds for a brand new PowerSpec G405 desktop with a 7th-generation Intel i7 processor inside to do the same.

Because Luu is a wonderful nerd, he got pretty granular with his testing. He tested computers running multiple operating systems to see which introduces the most lag, and he tested some systems on displays with different refresh rates to see how refresh rate alters the lag. He found that both variables dramatically altered the results.

With refresh rate, there was a consistent, measurable difference across computers. In his piece, he says, "at 24Hz each frame takes 41.67ms and at 165Hz each frame takes 6.061ms." So a computer with a custom Haswell-E processor took 140ms on a 24Hz display, but just 50ms on a 165Hz display. The G405 running Linux took 170ms on a 30Hz display, but just 90ms on a 60Hz display.
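To see why refresh rate matters so much, remember that a display only updates once per refresh, so each frame-time puts a floor under how quickly anything can appear. A quick sanity check of the figures quoted above (my own calculation, not Luu's):

    # Frame time is simply 1000ms divided by the refresh rate; a keypress
    # can't show up on screen any sooner than the next refresh.
    for refresh_hz in (24, 30, 60, 120, 165):
        print(f"{refresh_hz:>3}Hz -> {1000 / refresh_hz:.2f}ms per frame")

    #  24Hz -> 41.67ms per frame
    # 165Hz ->  6.06ms per frame

That alone accounts for a big chunk of the gap between the 24Hz and 165Hz results.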

The effect of the operating system was fairly pronounced, too. He folds operating systems into a broader problem he calls "complexities": modern operating systems, displays, and even keyboards have a lot more going on between an input happening on a device and that input appearing on screen.

In the case of operating systems, newer OSes have more steps to go through in order to register an input. One example Luu provides is iOS -- a keypress on an iPad might have to pass through 11 steps just to register. A keypress on an Apple IIe running Apple DOS 3.3 goes through considerably fewer.

But the problem isn't limited to iOS. In fact, as Luu's numbers show, those complexities take an even bigger toll in operating systems that have to support a wider range of devices than iOS does. This is why the Android devices Luu tested had more lag than the iOS devices, and why leaner operating systems like Chrome OS, Linux, and even the now-ancient Mac OS 9 exhibited less lag than Windows or Mac OS X on the same machine.

Importantly, Luu notes that while the complexity of the modern computer system might increase lag, it isn't necessarily a completely awful thing either. As one example, he points to the complexity of a modern keyboard versus the super simple Apple IIe keyboard:

A lot of the complexity buys us something, either directly or indirectly. When we looked at the input of a fancy modern keyboard vs. the Apple II keyboard, we saw that using a relatively powerful and expensive general purpose processor to handle keyboard inputs can be slower than dedicated logic for the keyboard, which would both be simpler and cheaper. However, using the processor gives people the ability to easily customise the keyboard, and also pushes the problem of "programming" the keyboard from hardware into software, which reduces the cost of making the keyboard.
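To put rough numbers on that, here's a toy calculation (my own illustration, not Luu's data) of the delay a firmware scan loop can add inside the keyboard itself; the scan rate and debounce window are assumed, typical-looking values rather than measurements:

    # A microcontroller only notices a keypress on its next sweep of the key
    # matrix, and usually waits out a debounce window before reporting it.
    SCAN_RATE_HZ = 1000   # assumed: matrix is swept 1,000 times per second
    DEBOUNCE_MS = 5       # assumed: software debounce before the press is sent

    scan_interval_ms = 1000 / SCAN_RATE_HZ
    worst_case_ms = scan_interval_ms + DEBOUNCE_MS   # press lands just after a sweep
    print(f"worst-case delay inside the keyboard: ~{worst_case_ms:.0f}ms")

Whether that kind of overhead is worth the flexibility of a reprogrammable keyboard is exactly the tradeoff Luu is describing.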

It's a tradeoff. Our computers nowadays are laggier, but they're also able to do a heckuva lot more than a computer from 30 years ago, and as Luu notes, there are opportunities to cut down on lag, particularly the lag induced by slower display refresh rates. In the conclusion of his piece, Luu says, "we're arguably emerging from the latency dark ages and it's now possible to assemble a computer or buy a tablet with latency that's in the same range as you could get off-the-shelf in the 70s and 80s."

The evidence is all around us. MSI brought the GS63VR laptop to market with an option for a 120Hz refresh rate, which means considerably less lag than a laptop with a standard 60Hz display. Razer recognised the role refresh rate plays in lag and introduced a phone with a 120Hz refresh rate too. Even Apple launched a 28cm iPad with a 120Hz refresh rate this year.

We're slowly, product by product, coming into a new age, where our computers might start feeling as fast as that Apple IIe gathering dust in your parents' attic. If you're curious about Luu's work and about computer latency in general, you can read more on his website.

While his work isn't as precise as what could be accomplished in a formal lab, it's a great first step in helping people understand where, and how, lag is introduced into their computers.



Comments

    Great read. "Bloatware" is still alive and well. Whenever I get a new PC (dare I say, a new "Advert" box for software...) I find I have to format and reload just so I can see some improvement in speed, although I'm sure many have found this necessary (this includes turning off a crapload of useless daemons/services depending on the OS I'm loading). Making computers "easier" to use for the masses certainly comes at a cost to the computation.

      Which ease of use elements do you think have contributed to the decline in responsiveness?

      I think it's more about the increased ease of software development than it is about making computers easier to use for end-users.

      Modern software has so many layers, mostly dynamically linked, that increased latency is unavoidable. I wouldn't necessarily say these layers make software easier to use, but they certainly make software easier to develop.

        From your comment below...
        my personal perception that software has become too complex
        Well that's kind of "my" definition of "bloatware". Although I think "complex" is the wrong choice of words, and as you said in regards to software dev, in today's environment of "virtually" unlimited everything (CPU/memory), it causes devs not to be complex, but lazy... if anything, devs are becoming less complex and letting abstraction do all the work, but abstraction from machine code always comes at a cost.
        Which ease of use elements do you think have contributed to the decline in responsiveness?
        I'll give you a great one that's so obvious most people don't even notice it... the GUI itself. I can run most of a computer/server from a console session (bash, shells etc). But it's 2017, and I'm well aware that we can't just switch back to a console; the reality is, a console runs a lot faster than any GUI-based "windowed" system. I'm not complaining, I enjoy using GUIs, but the more I pile on top, the heavier the load gets.

    This is really interesting and reinforces my personal perception that software has become too complex.

    Too many frameworks, layers and interfaces.

      It's also hardware layers. Whereas a keyboard used to have a dedicated input, with each key strike fed into a multiplexer and then into the PC where other stuff happened, now each key strike gets multiplexed, converted to the USB interface, then goes onto the PC's USB bus, which then links to the PCIe bus. Etc etc.

        Absolutely. In the old days, the Intel 8042 IO chip would provide the serial communications to the keyboard. Pressing a key would generate a hardware interrupt which could immediately be handled by software with extremely low latency. Similarly, glyph generation for the terminal was handled entirely in hardware.

        Nowadays, keyboards typically interface via USB, where each device has to be polled by the controller. There's also the matter of bus contention. No longer does it pass through the BIOS, but through a complex software stack before finally being passed to the application that has focus. Drawing the resulting character also then becomes vastly more complex.

        Yeah, I've heard gamers still prefer PS/2 for controllers as they find the latency is lower, even better than USB3, although I've never seen any evidence ;)

        While there are definitely more layers of hardware activity, it's not quite right to blame all of it on that. After all, you can send thousands of bytes across a PCIe link in the time it takes to run one 6502 instruction. Not sure under which circumstances they tested the Apple ][. On Commodore machines of that era, keyboard input was processed on the jiffy interrupt via polling -- the keys were actually row/column strobed through two parallel ports. So you had a latency of up to 16.67ms just to start looking for keystrokes... because that was efficient, and no one was asking for it to go faster, not because it couldn't have.

    Any metric that has the TI-99/4A at the top of the speed rankings is fundamentally flawed. I quite liked the Apple IIe, but the TI was so slow that the 300-baud modem it was attached to seemed quite sprightly in comparison!

    (The TI had an ambitious BASIC with built-in sprite movements and collisions and such, but it was so slow that sprites routinely passed right through each other without the collision being detected.)

    What was the refresh rate of the monitor used with the Apple? Would a modern PC have lower latency on that monitor?

    Not sure it matters that much unless you're a fast typist.

    It's not the only thing that has got worse with time; loading times on video game consoles seem to get worse each generation now!

      CRT monitors pretty much have lower latency than any of the digital displays.

        CRT is basically zero, hence why you use them on classic arcade games and not LCD; LCD is laggy as hell (latency).

      Apple displays probably weren't double buffered or synced in any way. They were using NTSC type displays -- the Apple ][ actually used some NTSC color tricks to get color -- so the refresh was always 16.68ms for half the display, 33.37ms for a full update. But statistically, only a fraction of that, since display update could happen at any time in the video scan.

      A modern computer going through the full OS will have to deal with display buffering, vsync-locked screen updates, software management of windows, text generation on a bitmapped display, etc. It's all useful stuff. On Windows, it's further complicated by the inane way Microsoft does window message queues -- not terribly useful, and also a slowdown.

      But if you wanted to test this apples to, well, Apples, the real test would be a keyboard-to-display measurement on a custom DirectX screen. That bypasses much of the OS stuff that Apple never had there in the first place in those primitive 4K machines.

      As well, there are dozens of other things that can be tested for latency. I usually see about 5ms latency through my Focusrite audio interface using ASIO drivers. I'm seeing a 10ms ping time between my computer here in Delaware, USA, and the gizmodo.com.au server, presumably somewhere halfway around the world.

    Not one mention of the threshold of response time that a human can actually detect. Once you're past that point, going faster is a waste of time.
