Facebook's new Graph Search is an ambitious project and brings with it the need for some serious computational grunt. Here's how Facebook is taking on that challenge.
Slashdot has a great in-depth article about Facebook's solution, called the Disaggregated Rack. Though it may sound like an elaborate torture device, it's actually a clever design that will make Facebook's search system more flexible and efficient. Essentially, Facebook is breaking its computational power down into separate modules that can be easily switched in and out:
Compute: A server with 2 processors, 8 or 16 DIMM slots, no hard drive, a small flash boot partition, and a "big NIC" with plentiful throughput to enable network booting.
RAM Sled: Facebook wants to replace the leaves and run them on a RAM sled with between 128GB and 512GB of memory, at $US500 to $US700 per sled. Only a basic CPU would be needed. Throughput would be 450,000 to 1 million key queries per second.
Storage: Facebook's solution here is based on its Knox storage design (PDF). The I/O demands are low: 3,000 IOPS or so, Taylor said. But Facebook only wants to spend $US500 to $US700 apiece, excluding the cost of the drives.
Flash Sled: Facebook would like between 500GB and 8TB of flash, with 600,000 IOPS. Excluding flash costs, Facebook would like the solution to cost around $US500 to $US700 apiece.
Facebook anticipates that Graph Search will initially use 20 compute servers, eight flash sleds, two RAM sleds and a storage sled; in total, that'll provide 320 CPU cores, 3TB of RAM and 30TB of flash. The beauty of the setup is that it will allow Facebook to easily upgrade in the future: right now, for instance, its RAM-to-flash ratio is 1:10, but it will have to climb to 1:5 to meet future targets. In other words, Facebook will be able to wheel in more grunt with minimal fuss -- and get on with the job it really cares about. [Slashdot]
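As a back-of-the-envelope sanity check, the headline totals above follow from the per-module numbers. A quick sketch (the even split of cores across servers is our assumption, not something the article states):

```python
# Rough check of the initial Graph Search rack totals quoted above.
compute_servers = 20
total_cores = 320
ram_tb = 3
flash_tb = 30

# Assumption: cores are spread evenly across the compute servers.
cores_per_server = total_cores // compute_servers  # 16 cores, i.e. 8 per socket

# Current RAM-to-flash ratio: 3TB : 30TB = 1:10.
current_ratio = flash_tb // ram_tb

# Hitting the stated 1:5 target means either halving flash per TB of RAM
# (~15TB against today's 3TB of RAM) or doubling RAM to 6TB.
target_flash_tb = ram_tb * 5

print(cores_per_server)   # 16
print(current_ratio)      # 10
print(target_flash_tb)    # 15
```

Because the modules can be swapped independently, shifting that ratio is a matter of wheeling in extra RAM sleds rather than replacing whole servers.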