In this post Julius discusses some of the innovations cropping up in forward-thinking data center design, ranging from the adoption of Yahoo’s chicken coop architecture, which is well suited to ambient-air cooling, to new server designs optimized for hot aisle/cold aisle architecture, to innovative approaches to power supply. While most operators do not have the deep pockets and resources of players like Facebook, Google, Yahoo or Amazon (all mentioned in this post), the kinds of forward-thinking innovations being pioneered by these companies are bound to have a wider impact.

by Julius Neudorfer, CTO and founder of North American Access Technologies, Inc. (NAAT). He has written numerous articles for various IT and data center publications, has delivered seminars and webinars on data center power, cooling and efficiency, and is the author of the Hot Aisle Insight blog. Connect with Julius on LinkedIn.

In the race for the lowest power usage effectiveness (PUE) and other green IT bragging rights, Facebook seemed to pull ahead with a claimed 1.07 PUE. By launching its Open Compute Project website on April 7, Facebook has decided to disclose details about its newest Prineville, Ore., data center for all to see, review and possibly emulate.
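For context, PUE is simply the ratio of total facility power to the power delivered to the IT equipment. The wattages in the example below are illustrative numbers chosen only to show how a 1.07 figure arises; they are not Facebook’s actual loads:

\[
\mathrm{PUE} \;=\; \frac{P_{\text{total facility}}}{P_{\text{IT equipment}}}
\qquad\text{e.g.}\qquad
\frac{1{,}070\ \mathrm{kW}}{1{,}000\ \mathrm{kW}} \;=\; 1.07
\]

In other words, a PUE of 1.07 implies only about 7 watts of overhead (cooling, power conversion, lighting) for every 100 watts reaching the IT gear.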

Unlike other high-profile PUE claims from giants such as Google and Yahoo, which provided some generalized information about what they did to improve efficiency in their designs, Facebook has shared a significant amount of detailed information. In fact, it has openly invited others to use the info to build their own data centers using some or all of its technology and designs.

For starters, Facebook, like Google, decided to custom-build its own “no frills” servers. These servers are non-standard in virtually all respects. They are approximately 1.5 rack units high, but are not designed to be mounted in a standard rack. They have a custom-designed drawer-type chassis that slides in backwards (all connectivity in front) into a custom enclosure with 30 server “slots.” This reverse connectivity design is purposeful, since it eliminates the normal need to access the rear of the server cabinets. As a result, Facebook is able to fully contain the rear hot aisle and run it at over 100 degrees Fahrenheit, adding to cooling efficiency, without exposing or stressing personnel when they need to add or replace servers. This custom enclosure is part of a “Triplet Rack” assembly. Two Triplet Racks flanking a 48V backup battery cabinet form a “six pack” of racks, housing and powering a cluster of 180 servers, each with 2 CPUs and up to six hard drives.
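As a rough sanity check on the published layout, here is a back-of-the-envelope sketch in Python. The server counts follow the figures above; the per-server power draw is an assumed placeholder for the arithmetic, not a number from the Open Compute specifications:

```python
# Back-of-the-envelope sketch of a Facebook "six pack" (illustrative only).
# The per-server draw is an assumed value, not a published specification.

SERVERS_PER_ENCLOSURE = 30     # slots per custom enclosure (published)
ENCLOSURES_PER_TRIPLET = 3     # a "Triplet Rack" holds three enclosures
TRIPLETS_PER_SIX_PACK = 2      # two triplets flank one battery cabinet

servers = SERVERS_PER_ENCLOSURE * ENCLOSURES_PER_TRIPLET * TRIPLETS_PER_SIX_PACK
print(servers)                 # 180, matching the published cluster size

ASSUMED_WATTS_PER_SERVER = 250  # hypothetical average draw per server
print(f"~{servers * ASSUMED_WATTS_PER_SERVER / 1000:.1f} kW per six pack")
```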


The servers themselves are devoid of anything that is not functional, such as a fancy plastic vendor logo faceplate (what would it say, “Facebook” model “MZ-1”?). The motherboard is custom, but fairly conventional. From a power supply design perspective, however, the servers are truly unique: each one uses a single 450-watt custom power supply that accepts 277V AC as its primary input, eliminating the need for a step-down transformer.
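The 277V figure is not arbitrary: it is the line-to-neutral voltage of a standard U.S. 480V three-phase service, which is what allows the servers to be fed directly from 480V distribution without an intermediate step-down transformer:

\[
V_{\text{line-to-neutral}} \;=\; \frac{480\ \mathrm{V}}{\sqrt{3}} \;\approx\; 277\ \mathrm{V}
\]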

Moreover, there is no traditional UPS in the Facebook Open Compute 1.0 design specifications. Instead, the “ride-through” power (should the utility power fail, and before the backup generators pick up the load) is provided by a bank of batteries at 48 volts, a long-time telco standard. What is very different is that the custom server power supply is designed to accept both 277V AC as the primary source and 48V DC as backup power. But the backup reserve time is far from what traditional data center users would find in their comfort zone: only 45 seconds of battery time. Even more radical, yet logical, the 48V DC section of the server power supply can only operate for 90 seconds at full load before it automatically shuts down to prevent overheating.

From the published specifications of the Facebook Battery Cabinet:

The operating temperature range of the battery cabinet is +5°C to +45°C (+41°F to +113°F).
Note: The ideal VRLA battery temperature for longest service life is typically +25°C (+77°F).

The battery cabinet provides 45 seconds of runtime at full load (at end-of-life).
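To put the 45-second figure in perspective, here is a minimal sketch of the energy a 48V battery cabinet must deliver for that ride-through. The load figure is an assumed example, not a number from the battery cabinet specification:

```python
# Rough ride-through energy estimate for a 48V battery cabinet (illustrative).
# ASSUMED_LOAD_KW is a hypothetical full-load draw, not a published figure.

ASSUMED_LOAD_KW = 45          # hypothetical load on one battery cabinet
RIDE_THROUGH_S = 45           # published end-of-life runtime, in seconds
NOMINAL_VOLTS = 48            # telco-standard DC bus voltage

energy_wh = ASSUMED_LOAD_KW * 1000 * RIDE_THROUGH_S / 3600
amp_hours = energy_wh / NOMINAL_VOLTS
print(f"~{energy_wh:.0f} Wh, or ~{amp_hours:.0f} Ah at {NOMINAL_VOLTS} V")
```

The point is simply that a 45-second window calls for far less stored energy, and therefore far less battery, than the multi-minute runtimes typical of conventional UPS designs.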

For comparative reference, Google has for several years used its own custom servers with onboard, sealed gel-cell batteries as power backup to avoid the need for a UPS. This was part of Google’s underlying computing topology of no single point of computing failure, as now seems to be the case with Facebook’s new site. This computing architecture tolerates a certain number of expected individual failures among thousands of compute nodes without impacting the overall performance of the computer “hive.” Of course, this acceptance of node failures as part of the overall computing architecture may be fine for mass search, but it is not necessarily appropriate for real-time financial or enterprise applications.
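To make the “hive” argument concrete, here is a hedged illustration with assumed numbers; neither the node count nor the failure rate is a figure published by Facebook or Google:

```python
# Illustrative expected-failure arithmetic for a large compute "hive".
# Both inputs are assumptions made for the sake of the example.

ASSUMED_NODES = 10_000                 # hypothetical cluster size
ASSUMED_ANNUAL_FAILURE_RATE = 0.03     # hypothetical 3% of nodes failing per year

expected_failures = ASSUMED_NODES * ASSUMED_ANNUAL_FAILURE_RATE
print(f"~{expected_failures:.0f} node failures per year "
      f"(~{expected_failures / 365:.1f} per day)")
```

At that scale, hardware failure is a daily routine rather than an emergency, which is exactly why the software layer, not a UPS, carries the availability burden.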

As unusual and innovative as the power system is, most of the claimed efficiency is due to non-traditional cooling. Facebook’s data center cooling follows in the footsteps of the Yahoo “Chicken Coop,” which also uses outside air for cooling instead of traditional mechanical cooling. While this approach is not unique to Yahoo, the Yahoo facility is the most visible recent large-scale example of an air-side “free cooling” data center.

In the case of Facebook’s Prineville site, the facility also adds moisture to the incoming airflow (evaporative cooling) to enhance the net cooling capacity when required. Of course, by using these unconventional cooling techniques (unconventional only at the moment), it does not need to comply with the 2008 ASHRAE TC9.9 recommended environmental envelope. But even as I write this blog, ASHRAE is in the process of broadening TC9.9’s allowable environmental envelope, which will be released later this year in the third revision of the TC9.9 specifications.
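For a sense of why adding moisture helps, here is a simplified adiabatic (evaporative) cooling estimate. This is the textbook approximation, not Facebook’s published design math, and the airflow and temperature drop are assumed values:

```python
# Simplified evaporative-cooling estimate (textbook approximation, not
# Facebook's design math). Inputs are assumed, illustrative values.

CP_AIR = 1.006        # kJ/(kg*K), specific heat of dry air
H_FG = 2257.0         # kJ/kg, latent heat of vaporization of water

assumed_air_kg_per_s = 100.0   # hypothetical airflow through the misting stage
assumed_delta_t = 8.0          # hypothetical temperature drop, in kelvin

water_kg_per_s = assumed_air_kg_per_s * CP_AIR * assumed_delta_t / H_FG
print(f"~{water_kg_per_s:.2f} kg of water per second")  # ~0.36 kg/s
```

A modest water flow can therefore knock several degrees off a very large volume of incoming air, provided the outside air is dry enough to absorb it.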

There does not seem to be much detail published on the networking infrastructure. The only published information is that two top-of-rack switches are directly powered by the 48V DC battery system.

Most traditional enterprise and co-lo data center designers and operators will say that all this non-standard design and custom equipment is technically interesting, but it does not change what they will do in their own data centers. That may be true for the moment; however, just as leading-edge technology from race cars eventually gets incorporated into everyday vehicles, some of these designs will find their way into mainstream data centers.

Following that thought, I had the opportunity to interview Greg Huff, CTO for Industry Standard Servers at Hewlett-Packard, about its involvement in the Facebook Open Compute Project. He stated that HP is now going to offer a 277V AC power supply that will fit any standard server HP makes. Moreover, HP will also offer a complete system with a 480V UPS and 277V power distribution to support this new power supply. This should be almost as efficient as the Open Compute system, but with the added security, power conditioning and longer backup runtime of the conventional UPS that data center users expect. It also allows for the choice of a full range of HP standard servers. It may take a long time to get the U.S. data center industry to consider and use 277V power instead of the 208V that is the existing standard. However, if this proves successful for HP, how long will it be before other major vendors such as IBM and Dell offer a 277V power supply as an option for their server lines?

For further reading on some of the deeper-level goals that need to be kept in focus if your green data center project is going to be a success, see our related post: “So You’re Building a ‘Green’ Data Center.”

The Bottom Line

Like a game of ping pong, the search and social media giants all continue to claim to be, and to become, more energy- and cost-efficient. They are driven to do this both from a practical point of view (CAPEX and OPEX) and for the public relations benefits of green and sustainability claims. Of course, Amazon deserves an honorable mention in this trifecta, even though it has not laid any direct claims to a hyper-efficient data center infrastructure for its EC2 cloud (and also for its now-popular Kindle, which may save millions of trees from being turned into paper books).

Traditional design firms will no doubt scoff and find flaws with some of the design parameters and operational assumptions of Facebook’s Prineville site, while at the other end of the spectrum, those with only a passing knowledge of data center infrastructure may see this as the perfect answer and wonder why everyone’s data center is not built like this.

Clearly, not many organizations can replicate or use all of these designs, because of the massive scale necessary to cost-justify custom-built servers and non-standard infrastructure. Nonetheless, as computing architectures continue to evolve, the lessons learned and the actual “outside the envelope” operating experience may help shape the direction of forward-thinking data center designers. Hopefully, they will consider and evaluate what Facebook and others are doing and use it to improve the efficiency of new data center designs, even if they utilize more conventional equipment.

So unlike betting the trifecta in a horse race, there really are no losers in the race for the lowest PUE.

To read some other best practices that can help reduce energy consumption in data centers, see our related post: “Best Practices for Greening Your Data Center.”


Author: Julius Neudorfer (3 Articles)

Julius Neudorfer is the CTO and founder of North American Access Technologies, Inc. (NAAT). Julius is a member of AFCOM, ASHRAE, BICSI and IEEE, as well as a Certified Data Center Design Professional (CDCDP) designer and instructor. He is the author of the Hot Aisle Insight blog. Connect with Julius on LinkedIn.