How much power can you fit into a single data centre rack?

I’ve been in the IT industry for over 20 years. Having started as a YTS trainee engineer, the first system I worked on was an old (and it was old in ’94) IBM System/38. My boss described it as the last real man’s machine IBM made, in that it was three-phase; it was replaced in the late ’80s by the AS/400 range of systems, which ran off single-phase 16 or 32 Amp supplies.

Over time hundreds of new models have been launched, and with the exception of a few high-end IBM systems (the IBM SP, p670/p690 and 595/795), the mainstay has run from 16 or 32 Amp feeds.

Of course, in recent times we have become more energy focused, and I guess this has been a good thing. The less power a server draws, the less power is required across the data centre for cooling and to cover losses in inefficient UPS systems.

Today most of the companies I see have a pair of 32 Amp feeds into each of their racks for resilience; however, they rarely draw more than 16 Amps. Of course there are exceptions, in the form of large storage devices and racks full of blade chassis. In recent times many data centres have put restrictions on the amount of power drawn per rack. This is not because the data centre provider cannot get that much power into the rack; it’s mainly because most commercial data centres cannot deal with the heat output, since a fully loaded rack can kick out over 100,000 BTU/hr, which is considerable.
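As a rough sanity check on that figure, here is a minimal sketch of the conversion (the 1 kW ≈ 3,412 BTU/hr factor is standard; the 30 kW example rack load is an assumption for illustration, not a measured figure):

    # Rough conversion between rack power draw and heat output.
    # Assumption: essentially all of the electrical power ends up as heat,
    # and 1 kW is roughly 3,412 BTU/hr.
    BTU_PER_KW_HR = 3412

    def heat_output_btu_per_hr(rack_load_kw):
        """Approximate heat a rack dumps into the room, in BTU/hr."""
        return rack_load_kw * BTU_PER_KW_HR

    # A heavily loaded rack drawing around 30 kW:
    print(heat_output_btu_per_hr(30))  # 102,360 BTU/hr -- i.e. over 100,000 BTU/hr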

I was recently working with a large retail client whose IBM reseller had sold them three IBM PureFlex chassis and V7000 storage in a single rack. Whilst the PureFlex is undoubtedly a great product and is being used to replace IBM 595 servers running the company’s SAP, the customer had not really thought about the power requirement beyond asking whether they needed three-phase (PureFlex is single-phase). When they came to install it, they realised that the power draw was going to be in the region of 26kVA (over 100 Amps) for the servers alone, and, as their server room had no cold or warm aisle containment, probably at least another 50% on top to cover cooling and UPS power losses. That’s considerable.
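To put those numbers in context, here is a minimal sketch of the arithmetic. It assumes a nominal 230 V UK single-phase supply, a power factor of roughly one (so kVA is treated as kW), and the approximately 50% cooling/UPS overhead mentioned above; all three are illustrative assumptions rather than measurements.

    # Back-of-the-envelope sizing for the PureFlex rack described above.
    # Assumptions: 230 V nominal single-phase supply, power factor ~1,
    # and ~50% overhead for cooling and UPS losses.
    VOLTAGE = 230.0

    def amps_from_kva(kva):
        """Current drawn by a single-phase load of the given apparent power."""
        return kva * 1000.0 / VOLTAGE

    server_load_kva = 26.0
    print(round(amps_from_kva(server_load_kva)))   # ~113 A -- "over 100 Amps"

    total_with_overhead = server_load_kva * 1.5    # servers plus cooling/UPS losses
    print(total_with_overhead)                     # ~39 kVA for the room overall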

Moving up the computing-power ranks, another client I’ve done a great deal of work with uses Dell servers with GPUs (I hate Dell, as I’m an IBM technology man through and through). Previously the customer had stacked 40 1U Dell servers into each rack, drawing just over 40 Amps per rack (a touch under 10kVA). However, the customer, who works in the oil and gas exploration industry, has recently rewritten their applications to work with the new GPUs that run in Dell’s new range of PowerEdge servers. As I have learned myself, a CPU normally has 4 to 16 cores, while a GPU consists of hundreds of smaller cores crunching through the application data together.

Effectively the client has built a monster of a machine: true high-performance computing on a scale more powerful than multiple multi-million-dollar mainframes, yet at a fraction of the cost. Of course, the only downside is the immense volume of power these things consume. We have had to configure each rack in our data centre to allow for dual feeds for resilience and 128 Amps of power draw (around 30kVA).
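For comparison, here is a quick sketch of how the per-rack density jumped between the old 1U configuration and the new GPU racks. The amp figures come from the paragraphs above; the 230 V nominal supply is an assumption for the conversion.

    # Per-rack power density: old 1U CPU racks vs. the new GPU racks.
    # Assumption: 230 V nominal supply, kVA taken as amps * volts / 1000.
    VOLTAGE = 230.0

    def kva_from_amps(amps):
        return amps * VOLTAGE / 1000.0

    old_rack_kva = kva_from_amps(40)    # 40 x 1U servers -> ~9.2 kVA ("a touch under 10kVA")
    new_rack_kva = kva_from_amps(128)   # GPU rack -> ~29.4 kVA ("around 30kVA")

    print(old_rack_kva, new_rack_kva, round(new_rack_kva / old_rack_kva, 1))
    # 9.2  29.44  3.2 -- roughly a threefold jump in power (and heat) per rack

That threefold jump is exactly why the heat and cooling questions raised earlier matter as much as the electrical supply itself.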
