Category Archives: Cooling

Facebook claims data centres ‘over-cooled’

Data centre operators can dramatically cut energy costs and their impact on the environment by doing without air conditioning, according to research by Facebook. According to V3.co.uk, the findings come from the firm’s Open Compute Project, aimed at making the social network’s IT operations as efficient as possible. Facebook said that it uses “100 percent outside air” to cool all of its own data centres, and that other data centre operators typically over-cool their facilities when they do not need to.

http://www.racplus.com/news/facebook-claims-data-centres-over-cooled/8653373.article?blocktitle=Latest-news&contentID=2332

Facebook’s eco strategy features spuds in servers

Many tech companies are implementing green practices in manufacturing and development, but Facebook has an original approach: using potatoes in its servers to make them more environmentally friendly. Under the Open Compute Project (OCP), Facebook is on a mission to improve the efficiencies of the servers, storage devices, and data centers that are used to power its social networking platform. Any breakthroughs that the company makes are shared with the rest of the OCP community so that they too can improve their own efficiencies and reduce the overall environmental impact of IT on the world.

http://www.pcworld.com/article/2049143/facebooks-eco-strategy-features-spuds-in-servers.html

Facebook: Data centres do not need air conditioning

Data centre operators can dramatically cut energy costs and their impact on the environment by doing without air conditioning, according to Facebook. The findings come from the firm’s Open Compute Project, aimed at making the social network’s IT operations as efficient as possible. Facebook said that it uses “100 percent outside air” to cool all of its own data centres, and that other data centre operators typically over-cool their facilities when they do not need to.

http://www.v3.co.uk/v3-uk/news/2295906/facebook-data-centres-do-not-need-air-conditioning

Cooling an OCP Data Center in a Hot and Humid Climate

Building a data center based on Open Compute Project designs in a relatively hot and humid area like Forest City, North Carolina, presented some interesting challenges. Chief among them, of course, was whether the 100% outdoor air cooling system Facebook debuted in our Prineville, Oregon, facility could operate as efficiently in an environment where the ASHRAE 50-year maximum wet bulb temperature is 21% higher, at 84.5°F instead of 70.3°F.

http://www.opencompute.org/blog/cooling-an-ocp-data-center-in-a-hot-and-humid-climate/
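
As a rough illustration of why the design wet-bulb figure is the number that matters for a 100% outside-air system: a direct evaporative cooler can only bring supply air down toward the wet-bulb temperature. The sketch below uses the two design wet-bulb values quoted above, plus an assumed 90% saturation effectiveness and a 95°F dry-bulb day (both of those are illustrative assumptions, not figures from the OCP post).

```python
# Illustrative sketch: the wet-bulb temperature is the floor that a direct
# evaporative ("100% outside air" plus misting) system can cool air toward.
# The 90% saturation effectiveness and the 95 F dry-bulb day are assumptions
# for illustration, not values from Facebook's design documents.

def supply_air_temp_f(dry_bulb_f: float, wet_bulb_f: float,
                      effectiveness: float = 0.90) -> float:
    """Approximate supply-air temperature of a direct evaporative cooler:
    T_supply = T_dry_bulb - effectiveness * (T_dry_bulb - T_wet_bulb)."""
    return dry_bulb_f - effectiveness * (dry_bulb_f - wet_bulb_f)

# Design wet-bulb temperatures quoted in the post above (deg F).
design_wet_bulb = {"Prineville": 70.3, "Forest City": 84.5}

for site, wb in design_wet_bulb.items():
    print(f"{site}: supply air ~{supply_air_temp_f(95.0, wb):.1f} F on a 95 F dry-bulb day")
```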

Humidity Excursions in Facebook Prineville Data Center

Facebook’s data center in Prineville, OR, has been one of the most energy-efficient data center facilities in the world since it became operational [1]. Some of the innovative features of the electrical distribution system are DC backup and high-voltage (480 VAC) distribution, which have eliminated the need for a centralized UPS and 480V-to-208V transformation.

http://www.electronics-cooling.com/2012/12/humidity-excursions-in-facebook-prineville-data-center/

Water Efficiency at Facebook’s Prineville Data Center

For Facebook, good data center design is all about efficiency — how efficiently we use energy, materials, and water, and how they tie together to bring about cost efficiency. We’ve previously shared information on energy and materials efficiency, and today we’re releasing our first water usage effectiveness (WUE) measurements and information on how we’ve achieved what we think is a strong level of efficiency in water use for cooling in the first building at our Prineville, Ore., data center (which we’ll call Prineville 1 here).

http://www.opencompute.org/blog/water-efficiency-at-facebooks-prineville-data-center/
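
For readers unfamiliar with the metric, WUE as defined by The Green Grid is site water use divided by IT equipment energy, reported in litres per kWh. A minimal sketch, using made-up inputs rather than Prineville measurements:

```python
# Minimal sketch of the water usage effectiveness (WUE) metric referenced
# above: litres of water used on site per kWh of IT equipment energy.
# The annual figures below are invented for illustration only.

def wue(site_water_litres: float, it_energy_kwh: float) -> float:
    """WUE = annual site water use (L) / annual IT equipment energy (kWh)."""
    return site_water_litres / it_energy_kwh

example_water_litres = 1_000_000   # hypothetical water used for cooling and humidification
example_it_energy_kwh = 5_000_000  # hypothetical IT equipment consumption

print(f"WUE = {wue(example_water_litres, example_it_energy_kwh):.2f} L/kWh")
```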

Facebook Auto-Cools Servers With Air-Pressure Sensors

Facebook is exploring a technology that controls temperatures in the data center by automatically moving software workloads among servers according to the air pressure on either side of each machine. As noticed by Data Center Knowledge, the technology is laid out in a Facebook patent application recently released to the world at large.

http://www.wired.com/2012/03/facebook-data-center-patent/
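
The patent application describes the control idea at a high level: treat the pressure differential across each chassis as a proxy for cooling effort and shift work away from the machines under the most thermal stress. The sketch below only illustrates that kind of loop; the threshold and helper names are hypothetical, not taken from Facebook's filing.

```python
# Hedged sketch of a pressure-driven load balancer: read the cold-aisle /
# hot-aisle pressure differential for each server and migrate one unit of
# work from the most stressed machine to the least stressed one. The
# threshold and the migrate callback are hypothetical placeholders.

from typing import Callable, Dict

def rebalance(pressure_delta_pa: Dict[str, float],
              migrate: Callable[[str, str], None],
              threshold_pa: float = 15.0) -> None:
    """Shift work when the spread in pressure differentials exceeds a threshold."""
    if len(pressure_delta_pa) < 2:
        return
    hottest = max(pressure_delta_pa, key=pressure_delta_pa.get)
    coolest = min(pressure_delta_pa, key=pressure_delta_pa.get)
    if pressure_delta_pa[hottest] - pressure_delta_pa[coolest] > threshold_pa:
        migrate(hottest, coolest)

# Example with fabricated sensor readings (pascals across each chassis):
readings = {"server-a": 42.0, "server-b": 18.0, "server-c": 11.0}
rebalance(readings, lambda src, dst: print(f"migrate workload: {src} -> {dst}"))
```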

Secret to Facebook’s green data center? Water misters

Facebook’s data center uses an evaporative cooling system in which air is cooled and humidified with misters, rather than a typical chiller. The state-of-the-art facility houses awesome amounts of computing power, but the biggest technical challenge has been the air handlers.

http://www.cnet.com/news/secret-to-facebooks-green-data-center-water-misters/
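
A back-of-the-envelope way to see why misting works: evaporating water absorbs its latent heat of vaporisation, roughly 2,260 kJ per kilogram, so each litre that evaporates carries away about 0.63 kWh of heat. The sketch below assumes, purely for illustration, that an entire 1 MW IT load is absorbed by evaporation, which makes it an upper bound on water use rather than a real operating figure (most of the cooling in a system like this comes from the outside air itself).

```python
# Back-of-the-envelope sketch: how much water would have to evaporate to
# absorb a given heat load. Assumes all heat is removed by evaporation,
# so this is an upper bound, not an operating figure. The 1 MW load is an
# arbitrary illustration, not a Facebook number.

LATENT_HEAT_KJ_PER_KG = 2260.0  # latent heat of vaporisation of water near room temperature
KJ_PER_KWH = 3600.0             # energy conversion: 1 kWh = 3600 kJ

def evaporation_litres_per_hour(heat_load_kw: float) -> float:
    """Litres of water per hour needed to absorb heat_load_kw by evaporation alone."""
    heat_kj_per_hour = heat_load_kw * KJ_PER_KWH  # kW sustained for one hour
    return heat_kj_per_hour / LATENT_HEAT_KJ_PER_KG  # 1 kg of water is about 1 litre

print(f"~{evaporation_litres_per_hour(1000):.0f} L/h to absorb a 1 MW load by evaporation alone")
```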

Perspectives – Open Compute Mechanical System Design

Last week Facebook announced the Open Compute Project (Perspectives, Facebook). I linked to the detailed specs in my general notes on Perspectives and said I would follow up with more detail on key components and design decisions I thought were particularly noteworthy. In this post we’ll go through the mechanical design in detail. As long-time readers of this blog will know, PUE has many issues (PUE is still broken and I still use it) and is mercilessly gamed in marketing literature (PUE and tPUE).

http://perspectives.mvdirona.com/2011/04/09/OpenComputeMechanicalSystemDesign.aspx
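
Since PUE comes up repeatedly in these posts, the definition is worth restating: total facility energy divided by IT equipment energy, so 1.0 is the theoretical ideal and everything above it is overhead. A minimal sketch with invented numbers:

```python
# Minimal sketch of the PUE metric discussed above: total facility energy
# divided by IT equipment energy. The inputs are invented for illustration.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """PUE = total facility energy / IT equipment energy (1.0 is the ideal)."""
    return total_facility_kwh / it_equipment_kwh

# A facility spending 15% of its energy on cooling, power conversion, etc.:
print(f"PUE = {pue(1_150_000, 1_000_000):.2f}")
```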

Open Compute Project – James Hamilton

The pace of innovation in data center design has been rapidly accelerating over the last 5 years, driven by the mega-service operators. In fact, I believe we have seen more infrastructure innovation in the last 5 years than we did in the previous 15. Most very large service operators have teams of experts focused on server design, data center power distribution and redundancy, mechanical designs, real estate acquisition, and network hardware and protocols. But much of this advanced work is unpublished and practiced at a scale that is hard to duplicate in a research setting.

http://perspectives.mvdirona.com/2011/04/07/OpenComputeProject.aspx