Category Archives: James Hamilton
Like most large data center operators, Facebook doesn’t disclose how many servers it has running in its data centers. But James Hamilton has used the company’s recent disclosures about its energy usage to do some interesting math and put together an estimate, which he has shared on his Perspectives blog.
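The back-of-the-envelope method behind such an estimate can be sketched as follows. All numbers here are illustrative assumptions, not Facebook's disclosed figures or Hamilton's actual inputs: convert annual energy to average facility power, back out facility overhead with an assumed PUE, and divide by an assumed per-server draw.

```python
# Sketch of a server-count estimate from annual energy use.
# Every input value below is a hypothetical assumption for illustration.

def estimate_server_count(annual_kwh, pue, watts_per_server):
    """Estimate server count from a facility's annual energy consumption."""
    hours_per_year = 365 * 24
    # kWh/year -> average facility draw in watts
    avg_facility_watts = annual_kwh * 1000 / hours_per_year
    # Remove cooling/distribution overhead via the assumed PUE
    it_watts = avg_facility_watts / pue
    return it_watts / watts_per_server

# Assumed inputs: 500 GWh/year, PUE of 1.5, 250 W per server
count = estimate_server_count(annual_kwh=500_000_000,
                              pue=1.5,
                              watts_per_server=250)
# Roughly 150,000 servers under these assumptions
```

The estimate is linear in each assumption, so uncertainty in per-server wattage or PUE translates directly into uncertainty in the count.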
You want to build a leaner, greener data center? James Hamilton, VP and distinguished engineer for Amazon Web Services, has some suggestions for you. Speaking at a recent Open Compute Project event, Hamilton, who’s something of a rock star among the web infrastructure crowd, provided some food for thought. Here are some of his tips.
Last week Facebook announced the Open Compute Project (Perspectives, Facebook). I linked to the detailed specs in my general notes on Perspectives and said I would follow up with more detail on key components and design decisions I thought were particularly noteworthy. In this post we’ll go through the mechanical design in detail. As long-time readers of this blog will know, PUE has many issues (PUE is still broken and I still use it) and is mercilessly gamed in marketing literature (PUE and tPUE).
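For readers new to the metric, PUE (Power Usage Effectiveness) is the ratio of total facility power to the power delivered to IT equipment; a perfect facility would score 1.0, and everything above that is cooling, power-distribution loss, lighting, and other overhead. A minimal sketch of the calculation, with illustrative numbers:

```python
# PUE = total facility power / IT equipment power.
# The values below are illustrative, not from any specific facility.

def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness: 1.0 is ideal; higher means more overhead."""
    return total_facility_kw / it_equipment_kw

# A facility drawing 1200 kW overall with 1000 kW reaching IT gear:
ratio = pue(total_facility_kw=1200, it_equipment_kw=1000)  # 1.2
```

One reason the metric can be gamed is that what counts as "IT equipment power" (e.g. whether server fans or in-rack distribution losses are included) is open to interpretation, which shifts the ratio without any real efficiency change.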
The pace of innovation in data center design has been rapidly accelerating over the last 5 years, driven by the mega-service operators. In fact, I believe we have seen more infrastructure innovation in the last 5 years than we did in the previous 15. Most very large service operators have teams of experts focused on server design, data center power distribution and redundancy, mechanical design, real estate acquisition, and network hardware and protocols. But much of this advanced work is unpublished and practiced at a scale that is hard to duplicate in a research setting.