Category Archives: Energy
Matt Corddry, Facebook’s director of hardware engineering, should be grateful to Tesla. Not because he drives one (he doesn’t), but because the popularity of its electric cars could help Facebook take a little more cost out of running its data centers. Corddry runs Facebook’s hardware engineering lab, which designs the cutting-edge servers, storage gear and other equipment that power its services. It shares those designs with the outside world through the Facebook-led Open Compute Project, and one of the technologies on his mind these days is lithium-ion batteries.
SAN JOSE, Calif. – Over the last three years, Facebook has saved more than $1.2 billion by using Open Compute designs to streamline its data centers and servers, the company said today. Those massive savings are the result of hundreds of small improvements in design, architecture and process, writ large across hundreds of thousands of servers.
When Facebook opened its data centre in Luleå in northern Sweden, the first it had built outside of the US, it raised more than a few eyebrows – and questions. Why Sweden? Why so far away from any major population centre? And isn’t it something of a risk to fill it with bespoke equipment based on the Open Compute Project, rather than conventional, off-the-shelf servers from HP or Dell?
Data centre operators can dramatically cut energy costs and their impact on the environment by doing without air conditioning, according to research by Facebook. According to V3.co.uk, the findings come from the firm’s Open Compute Project, aimed at making the social network’s IT operations as efficient as possible. Facebook said that it uses “100 percent outside air” to cool all of its own data centres, and that other data centre operators are typically over-cooling their facilities when they do not really need to do this.
Many tech companies are implementing green practices in manufacturing and development, but Facebook has an original approach: Using potatoes in its servers to make them more environmentally friendly. Under the Open Compute Project (OCP), Facebook is on a mission to improve the efficiencies of the servers, storage devices, and data centers that are used to power its social networking platform. Any breakthroughs that the company makes are shared with the rest of the OCP community so that they too can improve their own efficiencies and reduce the overall environmental impact of IT on the world.
Facebook’s data center energy use grew 33 percent in 2012, as the company installed tens of thousands of servers in its new company-built data centers. The growth of the company’s power usage is disclosed in the company’s latest sustainability report, which also documents the company’s move to reduce its computing footprint in Silicon Valley, even as it boosts its reliance on leased space in northern Virginia.
When I first visited Facebook’s data center in Prineville, Ore., in 2011, I felt privileged to spot some figures on the facility’s power-usage effectiveness (PUE) on a screen affixed to a wall. The PUE number, which gives a sense of how much of the energy gets consumed by computing gear, wasn’t exactly what some reporters wanted to know — total number of megawatts would have been better than PUE, and that sort of information came later — but it was a start toward transparency. Now, the PUE data won’t be such a big deal to catch a glimpse of anymore.
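PUE itself is a simple ratio, which is why it makes for an easy at-a-glance dashboard number. The sketch below shows the calculation; the sample figures are illustrative, not the actual numbers from the Prineville screen.

```python
# Minimal sketch of the PUE calculation a data center dashboard reports.
# The sample inputs below are illustrative, not Facebook's actual figures.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.

    A PUE of 1.0 would mean every watt goes to computing gear; typical
    enterprise data centers historically ran closer to 2.0, while Facebook
    has reported Prineville figures in the neighborhood of 1.07.
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

print(pue(10_700, 10_000))  # -> 1.07
```

Note that PUE says nothing about absolute scale, which is why reporters wanting total megawatts found it only a partial disclosure.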
Facebook’s data center in Prineville, OR, has been one of the most energy-efficient data center facilities in the world since it became operational. Some of the innovative features of the electrical distribution system are DC backup and high-voltage (480 VAC) distribution, which have eliminated the need for a centralized UPS and 480V-to-208V transformation.
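The payoff of removing those stages is that losses in a power chain multiply. The sketch below compares a conventional chain against the Prineville-style approach; the per-stage efficiency figures are illustrative assumptions, not measured Facebook values.

```python
# Hypothetical comparison of series power-conversion stages. The
# per-stage efficiencies are illustrative assumptions for this sketch.

def chain_efficiency(*stage_efficiencies: float) -> float:
    """Overall efficiency of power-conversion stages in series."""
    eff = 1.0
    for s in stage_efficiencies:
        eff *= s
    return eff

# Conventional: centralized double-conversion UPS (~94%) feeding a
# 480V-to-208V step-down transformer (~97%).
conventional = chain_efficiency(0.94, 0.97)

# Prineville-style: 480 VAC delivered near the rack, with DC battery
# backup outside the normal power path (~99.5% in normal operation).
distributed = chain_efficiency(0.995)

print(f"conventional: {conventional:.3f}, distributed: {distributed:.3f}")
```

Even a few percentage points per stage compound into meaningful facility-wide losses at megawatt scale.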
CIO — Facebook’s state-of-the-art data center in Oregon uses 38 percent less power and costs 24 percent less to run than its older data centers. These figures are astounding, and they should certainly make any cost-conscious CIO sit up and take notice. What’s the secret behind these savings? The company honored its hacker roots by custom-designing both the data center itself, and the servers (and management tools) inside it, from the ground up. It’s akin to what Google has been doing for the past 10 years or so, but the good news is that–unlike Google–Facebook has not kept what it has achieved and how it has achieved it a secret at all.
Many data centers sit on a lot of “cold storage” — servers containing terabytes of user data that must be retained but is rarely accessed, because users no longer need that data. While the servers are considered cold because they are rarely utilized, their hard drives are usually spinning at full speed although they are not serving data. The drives must keep rotating in case a user request actually requires retrieving data from disk, as spinning up a disk from sleep can take up to 30 seconds. In RAID configurations this time can be even longer if the HDDs in the RAID volume are staggered in their spin up to protect the power supply. Obviously, these latencies would translate into unacceptable wait times for a user who wishes to view a standard resolution photo or a spreadsheet.
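The worst-case latency described above is easy to estimate: with staggered spin-up, the last drive in a group cannot even begin spinning until its predecessors have started. The sketch below models this; the timing constants are illustrative assumptions, not measured drive specs.

```python
# Back-of-the-envelope model of cold-storage read latency with
# staggered spin-up. Timing constants are illustrative assumptions.

def cold_read_worst_case(n_drives: int,
                         spin_up_s: float = 30.0,
                         stagger_s: float = 5.0) -> float:
    """Worst-case seconds before a fully spun-down drive group can
    serve a read, if drives start one after another (staggered) to
    limit inrush current on the shared power supply."""
    if n_drives < 1:
        raise ValueError("need at least one drive")
    # The last drive starts after (n-1) stagger intervals, then needs
    # a full spin-up before the group is ready to serve data.
    return (n_drives - 1) * stagger_s + spin_up_s

print(cold_read_worst_case(1))   # -> 30.0 (single drive)
print(cold_read_worst_case(10))  # -> 75.0 (10-drive staggered group)
```

A minute-plus wait for a photo is exactly the user experience the always-spinning drives exist to avoid, at the cost of constant power draw.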
Facebook continues to build out its infrastructure and add servers at a rapid rate. According to local reports, Facebook is adding a third, smaller data center at its Prineville, Oregon facility, next to the two larger data centers already built. The current Prineville data centers total 334,000 square feet, while the new one will be 62,000 square feet. The new one also won’t add any jobs to the region.
For Facebook, good data center design is all about efficiency — how efficiently we use energy, materials, and water, and how they tie together to bring about cost efficiency. We’ve previously shared information on energy and materials efficiency, and today we’re releasing our first water usage effectiveness (WUE) measurements and information on how we’ve achieved what we think is a strong level of efficiency in water use for cooling in the first building at our Prineville, Ore., data center (which we’ll call Prineville 1 here).
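WUE follows the same pattern as PUE: a simple ratio, here liters of water consumed per kilowatt-hour of IT energy. The sketch below shows the metric; the sample numbers are made up for illustration and are not Prineville measurements.

```python
# Sketch of the Water Usage Effectiveness (WUE) metric; the sample
# inputs are made-up illustrations, not Prineville measurements.

def wue(water_liters: float, it_energy_kwh: float) -> float:
    """Water Usage Effectiveness: liters of water consumed per kWh
    of IT equipment energy. Lower is better."""
    if it_energy_kwh <= 0:
        raise ValueError("IT energy must be positive")
    return water_liters / it_energy_kwh

# e.g. 1,100,000 L of cooling water against 5,000,000 kWh of IT load:
print(round(wue(1_100_000, 5_000_000), 2))  # -> 0.22 L/kWh
```

Because evaporative cooling trades water for electricity, WUE and PUE are best read together rather than optimized in isolation.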
FORTUNE — Facebook is known for creating the most popular social networking tool, not designing hardware. But the company has taken a do-it-yourself approach to building out its data centers and the servers and racks that fill them. The result? Data centers that are 38% more efficient and 24% cheaper than average, according to Frank Frankovsky, director of hardware design and supply chain at Facebook.
The server business last year netted vendors $34.4 billion on sales of 8 million servers, according to IDC, but those numbers don’t show how that business is changing. For that, compare the growth in the traditional x86 market that sold those 8 million servers, which grew a mere 3.7 percent year over year, to what IDC calls the densely optimized servers used in webscale deployments. That segment grew by 51.5 percent year over year in units sold, and now represents 3.2 percent of all server revenue and 6.1 percent of all server shipments.
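Those shares imply something the raw percentages obscure: densely optimized servers sell for far less per unit than the market average. A quick sketch of the arithmetic (derived only from the figures quoted above, not from IDC’s own breakdown):

```python
# Quick arithmetic on the IDC figures quoted above. The shares come
# from the article; the derived numbers are just their implications.

total_revenue_usd = 34.4e9      # total server market revenue
total_units = 8e6               # total servers shipped

dense_revenue_share = 0.032     # densely optimized: 3.2% of revenue
dense_unit_share = 0.061        # and 6.1% of shipments

dense_revenue = total_revenue_usd * dense_revenue_share
dense_units = total_units * dense_unit_share

# Average selling price falls out of the shares:
asp_overall = total_revenue_usd / total_units
asp_dense = dense_revenue / dense_units

print(f"dense-optimized revenue: ${dense_revenue / 1e9:.2f}B")
print(f"overall ASP: ${asp_overall:,.0f}, dense ASP: ${asp_dense:,.0f}")
```

A shipment share nearly double the revenue share means webscale buyers are paying roughly half the average price per box, which is exactly the stripped-down economics Open Compute designs aim for.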
In 1893, Rudolf Diesel was awarded a patent for the diesel engine. Gandhi committed his first act of civil disobedience. Thomas Edison created the movie studio. New Zealand became the first country to give women the right to vote, and Cream of Wheat was invented. It was also the year that direct current (DC) took a back seat to alternating current (AC), after the Niagara Falls Power Company chose AC transmission for its power plant.