For the past several years, Amazon has been quietly building one of the fastest computing networks in history. Except it doesn't live in any single room — it's spread across the entire world. Virtual computers are now supercomputers.
The system, which Amazon calls EC2, lets customers hop in, rent a small sliver of Amazon's processing behemoth, and then duck out — all without having to invest in hardware of their own. Amazon takes care of everything; you just run your data through its mill. And while it's been cranking up EC2, Amazon has done a lot more than create a convenience: its cloud now ranks as the 42nd fastest "computer" in the entire world, clocking in at roughly 240 teraflops of total processing power.
It's a hell of a lot easier than the alternative, Wired reports, quoting one of EC2's clients:
"It's just absurd," he says. "If you created a 30,000-core cluster in a data centre, that would cost you $US5 million, $US10 million, and you'd have to pick a vendor, buy all the hardware, wait for it to come, rack it, stack it, cable it, and actually get it working. You'd have to wait six months, 12 months before you got it running."
Instead, one can rent the same amount of processing punch for a little over a thousand dollars an hour — dirt cheap in the mega-computing world. The availability of tens of thousands of cores on demand isn't just good business and an impressive achievement for Amazon; it means good things for science. When I spent the day at the American Museum of Natural History, an oft-cited roadblock was getting hands-on time with supercomputers, which are a hot commodity in the research world. But with the availability of a supercomputer that, as Wired puts it, doesn't actually exist, plugging in is easier than ever. Rather than competing for time on some university's hardware, researchers can tap Amazon's ample capacity to run tons of virtual servers at the same time. This is good news for anyone who needs to crunch big numbers.
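The rent-versus-build trade-off above can be sketched with some back-of-the-envelope arithmetic. This is only an illustration using the article's own rough figures — the exact rental rate is an assumption, not Amazon's published pricing:

```python
# Rough break-even calculation for renting cloud compute vs. building a
# cluster, using the figures quoted in the article. Both numbers are
# assumptions for illustration only.

UPFRONT_CLUSTER_COST = 5_000_000   # low end of the quoted $5M-$10M build-out
RENTAL_RATE_PER_HOUR = 1_300       # "a little over a thousand dollars an hour"

# Hours of continuous rental before you'd have spent the build-out cost
break_even_hours = UPFRONT_CLUSTER_COST / RENTAL_RATE_PER_HOUR
break_even_days = break_even_hours / 24

print(f"Break-even after ~{break_even_hours:,.0f} hours "
      f"(~{break_even_days:.0f} days of continuous use)")
```

By this rough math, renting only loses to building after months of nonstop, full-scale use — and that's before counting the six-to-twelve-month wait the quote describes.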