When are CRACs a good idea for cooling my datacentre?

The nature of today’s IT equipment means that even small datacentres cannot function without some form of precision cooling solution, and for many the starting point will be CRACs.  CRACs are an established technology deployed in thousands of datacentres, but what is the argument for deploying them in yours?  How do they stack up against challenges such as efficiency and reliability?

What is a CRAC?
How many will I need?
Where can I position CRACs for maximum effect?
What if my datacentre doesn’t have a raised floor or has pillars in the way?
How does CRAC stack up against other cooling approaches?
What else should I consider before going down the CRAC route?
Glossary of terms

What is a CRAC?

A CRAC is a Computer Room Air Conditioning unit.  It operates on the same principle as everyday air conditioning systems, taking hot air in and expelling cold air out.  

Unlike a traditional air conditioning unit, a CRAC can control humidity and so mitigate moisture damage to sensitive IT systems.  CRACs are designed to be precision managed and aligned to cope with the specific requirements of an IT infrastructure environment.  

CRACs date back to the earliest datacentres when facilities managers recognised the need to cool the air within the environment to avoid temperatures reaching levels that could adversely affect the performance of IT equipment components or damage them irreparably.  However, as datacentres have become extreme in their power consumption, density and criticality, CRACs have had to be deployed in increasingly innovative ways.

How many will I need?

The number of CRACs (and their size) required for a datacentre should be determined by its layout, power load and future requirements.  The more power consumed, the greater the heat dissipated.  The density of racks, their relative spacing from one another, and the availability of floor and ceiling space also have a significant bearing on CRAC deployment.  So does peak cooling demand: CRACs will need to stretch periodically to match the peak IT processing loads going through the datacentre at certain times.  You will also need to factor in that CRACs are energy-intensive systems which, as well as consuming large amounts of electricity to perform their air-cooling role, produce their own heat.

Calculating your CRAC requirement is not a straightforward process, but it can be governed by a simple principle: deploy as few as strictly necessary.  

This can be supported by deploying multiple CRACs as part of an interrelated system that circulates air (with the use of fans) as well as cooling it.  Judicious positioning of each CRAC is key to maximising efficiency of the system.
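The principle above can be turned into a back-of-envelope estimate.  The figures and the simple N+1 standby policy below are purely illustrative assumptions; a real sizing exercise must also weigh layout, airflow obstacles and peak loads as described earlier.

```python
import math

# Illustrative figures only -- not a substitute for a proper sizing survey.
it_load_kw = 60.0          # total IT power draw; nearly all of it becomes heat
ancillary_heat_kw = 6.0    # assumed ~10% extra for lighting, people, UPS losses
crac_capacity_kw = 20.0    # hypothetical rated cooling capacity per CRAC unit

total_heat_kw = it_load_kw + ancillary_heat_kw
units_needed = math.ceil(total_heat_kw / crac_capacity_kw)
units_with_standby = units_needed + 1   # simple N+1 standby policy

print(f"{total_heat_kw} kW of heat -> {units_needed} units "
      f"({units_with_standby} with N+1)")
```

Even this toy calculation shows why "as few as strictly necessary" needs qualification: redundancy and peak demand push the count above the raw heat-load arithmetic.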

 

Where can I position CRACs for maximum effect?

CRACs shouldn’t just be placed in the hottest parts of the datacentre.  Such an unscientific approach might succeed in bringing your datacentre temperature down to a safe level, but at enormous cost to the wider environment and your operating budget.  You also run the risk of buying too many CRAC units, and running them so hard that they need frequent repairs and replacement – compromising the uptime of your mission-critical IT systems.

Perimeter cooling

The accepted approach is to create hot and cold aisles within the datacentre through the front-to-front alignment of racks and the careful positioning of CRAC air intakes and exhausts.  For many years, datacentre designers invariably specified the use of a raised floor, whose recess acts as the reservoir of cold air produced by the CRACs.  This cold air is directed towards the IT racks through perforated floor tiles, where it is heated by the IT equipment and drawn through to the hot aisle behind, where the CRACs’ air intakes are situated, and so the cycle continues.

This approach is commonly referred to as perimeter cooling because the CRACs are situated around the walls of the datacentre.

[Diagram of a CRAC]

Perimeter cooling works fine for relatively modest IT loads of up to 5kW per rack.  However, whereas 5kW per rack once delivered plenty of IT grunt, today’s multi-core processors demand far more power within a smaller footprint – perhaps 15–20kW per rack or higher.  These produce more heat, so far more intensive approaches to cooling have been developed using familiar CRAC principles.

These are:

In-Row cooling

This is where CRACs are positioned in between cabinets and at the end of rows so that cool air is produced closer to the servers that heat it.  

This approach not only enables a significantly higher proportion of the rateable cooling capacity of the CRAC units to be realised, it also allows for lower power draw on the CRAC fans by virtue of their proximity and the relatively short airflows involved.  Both effects reduce cost and environmental impact.
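The fan saving is larger than it might first appear, because under the fan affinity laws fan power scales roughly with the cube of fan speed.  The sketch below illustrates that relationship; the rated power figure is a hypothetical example, not a specification of any particular CRAC unit.

```python
# Fan affinity law: fan power scales roughly with the cube of fan speed.
# Shorter air paths let In-Row fans run slower, so power falls much faster
# than the speed reduction itself. Rated power is a hypothetical figure.
rated_fan_power_kw = 2.0

for speed_fraction in (1.0, 0.8, 0.5):
    power_kw = rated_fan_power_kw * speed_fraction ** 3
    print(f"{speed_fraction:.0%} speed -> {power_kw:.2f} kW")
```

Running a fan at half speed thus draws only about an eighth of full-speed power, which is why shortening airflows pays back so strongly.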

The other advantages of In-Row cooling are responsiveness and redundancy.  Individual CRAC units can be programmed to adjust their fan speeds and temperature settings dynamically as IT loads dictate.  Extra CRAC units can be added into the design to sit in standby mode ready to automatically take up the cooling load should another unit unexpectedly fail.   

In-Rack cooling

This is where CRACs are well and truly ‘in amongst’ the IT equipment at the rack level inside the cabinet, thereby enabling the maximum possible cooling delivery over the shortest possible distance to optimise efficiency.  This approach is more complicated to architect than In-Row cooling, and typically more expensive to deploy.  More often than not, it is the CRAC cooling approach of last resort, used to address the most extreme cooling requirements for particularly sensitive, mission-critical or just plain hot IT equipment.

Hot/Cold Aisle Containment Systems (HACS/CACS)

As stated above, the rudimentary approach to perimeter CRAC cooling is all well and good for low IT loads, but unravels somewhat as loads increase.  The issue isn’t just increased heat, but the increased jeopardy involved when hot and cold air mixes within the datacentre environment.

HACS/CACS technology addresses the issue of hot/cold air mixing by introducing physical barriers that keep hot and cold air completely separate.

Hot-Aisle Containment is the more popular approach of the two and is set up using wall panels to enclose the hot aisle section of the datacentre (i.e. the rear sides of two opposing rows of cabinets).  Anyone working in the datacentre, attending to the front of the racks, experiences a pleasantly cool environment.

Cold-Aisle Containment tends to be slightly less popular as it orientates itself in the opposite way, enclosing the cool air in a corridor space that – while pleasant to work in – can feel slightly cramped.  CACS is easier to retrofit into an existing, non-contained hot/cold aisle setup however, because the racks are already facing the correct way.  Unless it is a completely new build, installing HACS will require existing racks to be turned 180 degrees.

Both HACS and CACS commonly make use of In-Row and, to a lesser extent, In-Rack cooling.

 

What if my datacentre doesn’t have a raised floor or has pillars in the way?

The raised floor has long been a recognised hallmark of high performance datacentres, but technology evolution and the development of more advanced cooling solutions have made hard-floor datacentres just as viable.  Indeed, many new-build datacentres regularly make use of close-coupled cooling solutions and use overhead ducts to supply power and data connectivity.

One way to use CRACs in a hard-floor environment is to use a suspended ceiling instead, deploying the system in an ‘upside-down’ alignment to the raised-floor perimeter cooling approach using vented ceiling tiles.

By contrast, using CRACs at ground level in a hard-floor environment compromises airflow, and this could equate to tens or even hundreds of thousands of pounds’ worth of energy wastage over the lifetime of a datacentre.

Other datacentres will have obstacles such as pillars, pipes, jutting walls and cabling bundles that create unpredictable airflow problems and influence the positioning of CRAC units.  In theory, In-Rack cooling is the only CRAC approach completely unaffected by such constraints.  However, when you consider a HACS/CACS as an enclosed ecosystem solution, one can argue this is (by definition) also unaffected by any external factors, assuming you have sufficient space to locate it.  

Applying Computational Fluid Dynamics (CFD)

The intractable issue with datacentre cooling has always been a lack of certainty as to the benefit of a cooling design before you have committed to installing and running it.  This is simply because all datacentres are different in terms of shape, size, ambient conditions, IT load (which will be dynamic in any case), type of IT equipment and airflow obstacles.  Even then, you have the steady or sudden onset of future change to cope with too!

One solution is arguably to create a brand new contained environment, such as the HACS/CACS approach, because the manufacturer should be able to provide accurate estimates based on the known parameters of the enclosure.   

For all other instances, organisations are increasingly using Computational Fluid Dynamics (CFD); a form of advanced modelling software.  This maps your unique datacentre environment and all the power, cooling and IT infrastructure within it, revealing the airflow (air path, velocity etc.) and temperature, and how these would be impacted by specific environmental changes.  

In an instant, datacentre managers can visualise the optimum efficiency and effectiveness of their cooling strategy and use that evidence to help get investment decisions signed-off by non-technical colleagues.  

Specific use cases include:

  • Seeing the effect of increased IT virtualisation leading to higher density servers so that change can be managed.
  • Prioritising the most dangerous ‘hotspots’ that could shorten the life of IT equipment, or cause them to shut down without warning (all IT systems have a thermal cut-off limit to protect themselves from excess heat).
  • Determining a staged approach to PUE reduction.
  • Gaining datacentre performance insights that conventional monitoring tools cannot provide.

How does CRAC stack up against other cooling approaches?

There is no getting away from the fact that CRAC technology has been around for a long time, but the latest CRAC-powered solutions offer energy-efficient ways of maintaining a safe datacentre environment.

CRACs face stiff competition from alternative datacentre cooling approaches such as free cooling, liquid cooling and chilled-water cooling.  According to one analyst, these three will account for 75% of the global market by 2023 and CRACs will be mainly deployed in warmer climates where the others are less feasible.

This may be true for new-build datacentre deployments, or large-scale fit-outs for major IT service providers, but not necessarily for smaller datacentres and/or those that must retrofit.

Many organisations are less concerned about the optimum design of a new datacentre environment and more focused upon how to extract maximum efficiency and achieve maximum resilience from their current, dynamic environment.

In these instances, unless there is a clear opportunity to embrace an alternative such as free cooling, and you can use CFD to prove the business case, you may be well served by some form of CRAC-driven solution.

[Graph of cooling options]

What else should I consider before going down the CRAC route?

Knowing whether CRAC is right for you, selecting and sizing the right CRAC for your needs, and architecting the appropriate strategy using (for example) containment and In-Row cooling should all be key components of your evaluation process.  The obvious omission from this list is: how will it be serviced and maintained?

A regular CRAC service plan, accredited by the manufacturer and implemented in line with your needs, is essential to ensuring the continual uptime of your datacentre.  This should ideally go further than a routine maintenance programme that periodically tests and replaces parts, otherwise you run the risk of encountering problems after they have arisen, rather than before.

Look to enshrine the principle of proactive support by using an accredited service and maintenance provider with remote monitoring capabilities.

This way, you have the peace of mind of knowing that emerging issues (and new efficiency opportunities) are being attended to, and can gain immediate visibility of cooling performance and efficiency, 24/7.  You stand to derive additional benefit if this is the same organisation that helped design and implement your CRAC solution, as it will have an advanced understanding of its ongoing evolution and how to address your needs as they change over time.

Glossary of Terms

CACS

Cold-Aisle Containment System.  An enclosed datacentre environment that uses physical barriers to prevent hot and cold air mixing so that the cooling process is as efficient as possible.  In CACS, the cold aisle between the front sides of two inward-facing rows of server cabinets is contained, leaving the rest of the datacentre space warm/hot.

CFD

Computational Fluid Dynamics – a method of applying the laws of physics to model and visually represent the present or anticipated behaviour of a gas or liquid as it interacts with objects or other gases/liquids.  Used in datacentres to model flows of hot/cool air.

CRAC

Computer Room Air Conditioning unit – operates on the same principle as everyday air conditioning systems, in that it takes hot air in and expels cold air out.  However, unlike a traditional air conditioning unit, a CRAC can control humidity and is designed to be precision managed and aligned to cope with the specific requirements of an IT infrastructure environment.

HACS

Hot-Aisle Containment System.  An enclosed datacentre environment that uses physical barriers to prevent hot and cold air mixing so that the cooling process is as efficient as possible.  In HACS, the hot aisle between the rear sides of two outward-facing rows of server cabinets is contained, leaving the rest of the datacentre space cool.

In-Rack Cooling

The deployment of specially designed, rack-mountable form factor CRACs to deliver highly targeted, extremely close-proximity cooling.  Similar to In-Row cooling in its ability to achieve greater efficiency and effectiveness than perimeter cooling.

In-Row Cooling

The deployment of specially designed CRAC units in between and alongside server cabinet rows, shortening airflow distances and focusing the delivery of cooling where it is needed by IT equipment in a more efficient manner than traditional perimeter-based approaches.

kW

1,000 watts.  Watts are the unit of measurement for real electrical power.

Perimeter cooling

A strategy for deploying CRAC units at the perimeter of a datacentre and using floor or ceiling recesses to circulate air into non-contained hot and cold aisles.

PUE

Power Usage Effectiveness – the ratio of the total power delivered to a datacentre facility to the power used by the computing equipment within it.  The lower the PUE (1.0 being the lowest theoretically possible), the more efficient the datacentre is at converting its electricity consumption into value-generating IT-driven activity.  Cooling datacentre IT equipment is typically the greatest challenge to achieving a low PUE.
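As a quick illustration of the ratio, using hypothetical power figures:

```python
# Hypothetical power figures for illustration only.
total_facility_power_kw = 180.0   # everything the site draws: IT, cooling, lighting
it_equipment_power_kw = 120.0     # the portion reaching the IT load itself

pue = total_facility_power_kw / it_equipment_power_kw
print(f"PUE = {pue:.2f}")   # every 1 kW of IT work costs 1.5 kW at the meter
```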

Uptime

The availability achieved by IT systems over a given period.  Uptime is expressed in percentage terms (e.g. 99.999% uptime) and normally covers one year.

Here to help

Let’s talk about your digital aspirations and the next steps to take you on your journey to hybrid cloud.  Get in touch and we’ll show you the way.