Monthly Archives: September 2014
There are certain ways of doing things in hardware engineering, and engineers simply follow these rules because there’s no use fighting them even if they wanted to. Frankly, most don’t even think about it because it’s just a given. But during a recent tour of Facebook’s hardware lab, director of engineering Matt Corddry said Facebook’s scale requires his team to rethink the old rules and let engineers imagine outside industry standards.
The Open Compute open source hardware project got its start more than three years ago. Social media giant Facebook, which was at the time using custom server designs it created in conjunction with Dell, sat down with Rackspace Hosting and a number of other big datacenter operators and worked out the means of open sourcing hardware designs, both to drive innovation in servers, storage, switches, and datacenters and to drive down the cost of that infrastructure.
Saying you don’t have to be Facebook to take advantage of open source hardware the social network’s engineers have designed for it, Vantage Data Centers is pitching its Santa Clara V2 facility in Silicon Valley to customers interested in using the stripped-down, or “vanity-free,” gear to power their applications. Vantage has become a member of the Open Compute Project, the Facebook-led open source hardware design initiative. Other data center providers that support the project include Rackspace and IO, both of which have built public cloud services using the designs.
Facebook is kicking off the week with a number of new bullet points on its open source agenda. Introduced at the @Scale technical summit in San Francisco on Monday, the social network is targeting engineers who build or maintain systems that are designed for scale. The first project is dubbed “mcrouter,” a memcached protocol router being released under an open-source BSD license.
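mcrouter sits between clients and cache servers and speaks the standard memcached ASCII text protocol on both sides. As a rough illustration of what that protocol looks like on the wire (this is an invented in-memory sketch for the example, not mcrouter’s actual code; the `MiniMemcacheHandler` class is an assumption of this illustration):

```python
# Minimal sketch of the memcached ASCII text protocol that a router like
# mcrouter forwards. Illustrative only: a tiny in-memory handler covering
# set/get/delete, with none of mcrouter's routing, pooling, or failover.

class MiniMemcacheHandler:
    """Handle a small subset of the memcached ASCII protocol."""

    def __init__(self):
        self.store = {}  # key -> (flags, data)

    def handle(self, request: str) -> str:
        lines = request.split("\r\n")
        parts = lines[0].split()
        cmd = parts[0]
        if cmd == "set":
            # Request shape: set <key> <flags> <exptime> <bytes>\r\n<data>\r\n
            key, flags = parts[1], parts[2]
            self.store[key] = (flags, lines[1])
            return "STORED\r\n"
        if cmd == "get":
            key = parts[1]
            if key in self.store:
                flags, data = self.store[key]
                return f"VALUE {key} {flags} {len(data)}\r\n{data}\r\nEND\r\n"
            return "END\r\n"  # miss: no VALUE line, just the terminator
        if cmd == "delete":
            existed = self.store.pop(parts[1], None) is not None
            return "DELETED\r\n" if existed else "NOT_FOUND\r\n"
        return "ERROR\r\n"

# Example exchange:
h = MiniMemcacheHandler()
resp = h.handle("set user:1 0 0 5\r\nalice\r\n")  # returns "STORED\r\n"
resp = h.handle("get user:1\r\n")  # returns "VALUE user:1 0 5\r\nalice\r\nEND\r\n"
```

Because every memcached client already speaks this protocol, a router like mcrouter can be dropped in transparently between application and cache tier.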
Open source “vanity-free” hardware bought in bulk from Taiwanese manufacturers may offer a compelling price difference at the scale of Facebook, but people who work in the majority of the world’s enterprise IT shops generally don’t view servers promoted by Facebook’s Open Compute Project as something that makes sense for their data centers. Yesterday we wrote about the role of OCP in the enterprise data center as seen from OCP hardware vendors’ and the Open Compute Foundation’s perspectives. Today, we’ll cover the opinions of data center industry experts who work with enterprise data center end users.
Facebook’s Open Compute Project has been one of the most talked about developments in the world of data center hardware over the past couple of years, and interest in the first ever open source hardware design community and its output has only grown. Facebook has publicly said it saved more than $1 billion as a result of using Open Compute gear in its data centers, and companies like Rackspace and IO have built cloud infrastructure services using Open Compute server designs. Earlier this year Microsoft said it had adopted OCP specs for the infrastructure that supports its entire portfolio of online services, including Azure.
Facebook announced the Open Compute Project in 2011 as a way to openly share the designs for its data centers — “to spark a collaborative dialogue … [and] collectively develop the most efficient computing infrastructure possible.” Starting in 2009, three Facebook employees dedicated themselves to custom-designing servers, server racks, power supplies, UPS units, and battery backup systems for the company’s first data center in Prineville, Oregon.