April 20th, 2019
– No air conditioning is required
– No water is consumed
– Electronics completely isolated from ambient air
– The power needed to cool data centers is reduced by up to 98%
– Waste heat from servers can be recovered and reused
– Data centers run silently
– Dielectric heat transfer fluid is never replaced and has a GWP of zero
– Causes of server failures are reduced and enclosures can be reused
– E-waste is cut significantly
– Scalable in form factor from the Far Edge, to the Aggregated Edge, to the Regional Data Center to the Central Data Center
Cooling Data Centers – The End Game by Herb Zien
An Analogy – Lighting
Incandescent lighting was the standard for 100 years. This technology is so inefficient that scientists spent decades searching for an alternative, and since the 1980s most engineers assumed that Light Emitting Diodes would be the end game. However, LED technology was not ready for commercialization because bulbs could not meet market requirements in terms of color and cost. An entire industry, Compact Fluorescent Lighting, sprang up to transition from legacy to LED. Patents were filed, factories built, and distribution channels created, but all of this was temporary. When was the last time you bought a CFL bulb?
Legacy Technology for Cooling Data Centers
Immersing electronics in air is the legacy technology for data center cooling. Like incandescent bulbs, it is extraordinarily inefficient, because air conducts heat poorly. Massive fan power is required to force heat to move from where it is generated to where it must be rejected. In addition, air conditioning, either direct expansion or evaporative cooling, usually is required to promote heat transfer. As a result, half the energy consumed by most data centers is wasted cooling the electronics.
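The "half the energy" claim maps onto the familiar PUE metric. A back-of-envelope sketch, with load figures that are illustrative assumptions rather than measurements from any real facility:

```python
# Rough PUE arithmetic behind the "half the energy is wasted" claim.
# All load figures below are illustrative assumptions.

it_load_kw = 500.0        # power drawn by the servers themselves
cooling_kw = 450.0        # fans, chillers, CRAC units (assumed)
other_kw = 50.0           # lighting, power-distribution losses (assumed)

total_kw = it_load_kw + cooling_kw + other_kw
pue = total_kw / it_load_kw               # Power Usage Effectiveness
cooling_share = cooling_kw / total_kw     # fraction of all energy spent on cooling

print(f"PUE = {pue:.2f}")
print(f"Cooling share of total energy = {cooling_share:.0%}")
```

With these numbers PUE comes out at 2.0, i.e. roughly one watt of overhead for every watt of computing, which is the situation the paragraph describes.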
Transitional Liquid Cooling Technologies
Transitional liquid cooling technologies, Cold Plates and In-Row Cooling, sprang up to partially address air-cooling's deficiencies. Cold Plates, originally developed to cool high-power processors in computer gaming products, are being modified for data center applications. Standard heat spreaders mounted on processors are replaced with heat exchangers that transfer heat from the processor to a fluid, usually water, which is pumped to a second exchanger that rejects the heat to air. Cold Plates are often referred to as “direct contact” systems, but there is no direct contact; only the hottest electronic components are covered by Cold Plates, and there is a heat transfer barrier between the processor and the transfer fluid. Furthermore, only about half the heat generated in a server is removed by the fluid; the rest is blown into the room by fans. In-Row Cooling technology makes the room smaller. In legacy data centers a huge volume of air is cooled to remove heat generated in racks that occupy only about half of the floor space. In-Row Cooling units, essentially mini-HVAC systems comprising fans and water-to-air heat exchangers, are installed in the racks to reduce the volume of air that must be cooled. This technology removes heat but is expensive and introduces water to the data center.
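The heat split described above can be put in numbers. A sketch assuming a nominal per-server heat load and the article's roughly 50% liquid-capture figure; the water-flow estimate uses the standard sensible-heat relation Q = ṁ·cp·ΔT, with an assumed loop temperature rise:

```python
# Heat balance for a cold-plate deployment: only about half the server heat
# reaches the liquid loop; the rest is still blown into the room by fans.
# Per-server wattage and loop temperatures are assumptions for illustration.

servers = 1000
heat_per_server_w = 400.0
liquid_capture = 0.5            # fraction removed by cold plates (per the article)

to_liquid_w = servers * heat_per_server_w * liquid_capture
to_room_air_w = servers * heat_per_server_w - to_liquid_w

# Water flow needed to carry the liquid-side heat: Q = m_dot * cp * dT
cp_water = 4186.0               # specific heat of water, J/(kg*K)
delta_t_k = 10.0                # loop temperature rise (assumed)
flow_kg_s = to_liquid_w / (cp_water * delta_t_k)

print(f"To liquid loop: {to_liquid_w/1000:.0f} kW")
print(f"Still rejected to room air: {to_room_air_w/1000:.0f} kW")
print(f"Required water flow: {flow_kg_s:.1f} kg/s")
```

The point of the sketch is the second line: even with cold plates installed, the room-air cooling plant still has to absorb a large share of the total load.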
Total Immersion, where all heat generating components are immersed in a heat-conductive but electrically non-conductive medium, is a more energy-efficient solution than Cold Plates or In-Row Cooling systems, but most approaches are not ready for prime time. There are two forms of Total Immersion cooling: Two-Phase and Single-Phase. Two-Phase Immersion systems are Tank-Based; electronic components are submerged in a refrigerant bath. Boiling occurs on the surface of heat generating components, and vapor passively rises to the top of the enclosure, where it condenses on water-cooled coils and falls back into the tank. This technology has major drawbacks, which include:
- The dielectric refrigerant is very expensive
- The refrigerant is highly volatile and evaporates when the lid is opened for maintenance
- Cavitation associated with boiling can erode electronic boards
- Some refrigerants are toxic
- Water is required to condense the refrigerant vapor
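The last drawback follows directly from the physics: two-phase systems reject heat through the latent heat of vaporization, so every kilogram of refrigerant boiled off must be condensed again on the water-cooled coils. A sketch of the energy balance; the latent-heat value is typical of engineered dielectric fluids in rough order of magnitude, not any vendor's datasheet number:

```python
# Two-phase immersion energy balance: vapor generation rate for a given
# tank heat load. Both figures below are assumptions for illustration.

rack_heat_kw = 50.0          # heat load in the tank
h_fg_kj_per_kg = 100.0       # latent heat of vaporization of the refrigerant

vapor_kg_s = rack_heat_kw / h_fg_kj_per_kg   # kW / (kJ/kg) = kg/s

print(f"Vapor the water coils must condense: {vapor_kg_s:.2f} kg/s")
```

Half a kilogram of vapor per second, continuously, is why the condenser coils and their water supply are not optional.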
Single-Phase Immersion systems can be either Tank-Based or Rack-Based.
Tank-Based systems resemble a rack tipped over on its back, with modified servers inserted vertically into slots in the tank. Drawbacks include:
- Scalability, because the tank takes a lot of floor space
- Weight because there is a large volume of fluid
- Messiness because access to electronic components requires aprons and gloves
- Some systems use a mineral oil dielectric, which often has impurities
- All cooling is by bulk flow, which reduces efficiency
- Water is required to cool the dielectric
Rack-Based Single-Phase Total Immersion – The Final Answer
Rack-Based Single-Phase Total Immersion is the ultimate solution for cooling data centers. This technology, perfected by LiquidCool Solutions (LCS), overcomes negative perceptions the market may have had about immersion cooling. LCS-cooled hardware is Rack-Based, scalable across data center applications, and:
- Cost effective
- Energy efficient
- Neat and easy to maintain
- No water is required
For lighting, CFLs were a transition technology from incandescent bulbs to LEDs. For data centers, Cold Plates and In-Row Cooling systems are transition technologies. They are better than air but:
- Not as efficient as total immersion
- Fans are still required
- Heat removal hardware can get in the way of maintenance
- These systems can be expensive
Rack-Based Single-Phase Immersion Technology is the End Game for Cooling Electronics.
December 20th, 2017
Laboratory Study and Demonstration Results of a Directed-Flow, Liquid Submerged Server for High-Efficiency Data Centers Eric Kozubal National Renewable Energy Laboratory https://www.nrel.gov/docs/fy18osti/70459.pdf
Seagate vet takes helm of VC-backed tech company
Nov 8, 2017
By Katharine Grayson – Senior Reporter, Minneapolis / St. Paul Business Journal
Former Seagate Technology executive Darwin Kauffman has been named CEO of venture-capital-backed tech company LiquidCool Solutions Inc. Kauffman most recently spent five years as vice president of management and senior director of product strategy for data-storage giant Seagate. He succeeds Herb Zien, an early LiquidCool Solutions investor who took over as CEO in 2013. Rochester, Minn.-based LiquidCool keeps servers and other equipment from overheating by submerging components in a soybean-based liquid that keeps them chilled. The company launched a decade ago as Hardcore Computer with plans to build high-end computers for gamers. It later got out of that business, citing stiff competition from hardware manufacturers, and began targeting the data center market. The company was one of four businesses picked to participate in Wells Fargo & Co.’s Innovation Incubator (IN2) program for clean-tech startups two years ago. LiquidCool’s investors include Arthur Ventures, which has operations in Fargo, N.D., and Minneapolis, plus Capital Midwest Fund and Minneapolis-based StarTec Investments.
Rochester firm has cool solution.
By Jeff Kiger – July 28th, 2017
Leaders of a Rochester technology firm believe they have “a very big solution to a very big problem.” CEO Herb Zien says LiquidCool Solutions “can pretty much cool electronics of any shape or size” through submerging circuit boards in liquid and eliminating the use of power-hungry fans. The U.S. Department of Energy’s National Renewable Energy Laboratory, in association with Wells Fargo, recently found that LiquidCool’s servers reduce data center power usage by 40 percent compared to using traditional fans and air conditioning. That’s significant, because studies have estimated that data centers account for 2 percent of all energy use in the U.S. “Fans are terrible. They don’t make sense to cool computers, and they never did,” he said. Improving performance and reducing energy use through immersing circuitry in dielectric fluid has always been at the core of the Rochester company, though the focus has evolved in the past 10 years. LiquidCool Solutions, based at 2717 U.S. 14 West, was founded in 2007 by Chad Attlesey, Daren Klum and Scott Littman under the original name of Hardcore Computing. It was led by CEO Al Berning. The company began with seven patents for its revolutionary technology. It started out by making custom desktop computers for video gamers with aggressive names like Reactor and Detonator, but that really never caught on with customers. Hardcore shifted to creating desktop workstations and then servers. The latter attracted interest from data centers. In 2012, the company rebooted and changed its leadership as well as its name. Now LiquidCool is finding success in the much less sexy markets of data centers and rugged machines for use in extreme heat conditions. The firm has expanded its intellectual property portfolio to 30 patents with 16 more pending. A big boost is coming from having its cooling results tested and confirmed by a trusted source like NREL.
That happened because LiquidCool was accepted into Wells Fargo’s Innovation Incubator program to create clean, smart building technologies. Zien sees LiquidCool growing in the coming years as more customers adopt its technology. The Rochester development and prototype lab has 13 on staff. Much of the technology is manufactured nearby at Benchmark Electronics’ facility. “There’s a lot of talent in Rochester. It’s our inclination, for a whole lot of reasons, to stay and grow in Rochester,” he said. LiquidCool is currently closing on $6.5 million in financing, which is hoped to be the final round of external funding it needs. While it’s hard to predict the future, Zien said he could see the Rochester site “doubling” in size or more. “What’s going on in that little shopping mall in Rochester is a very, very big deal,” he said. “We always knew the technology being developed in Rochester was really groundbreaking.”
LiquidCool awarded Green Grid “Contribution Award”
28 November, 2016 | White Paper Editors:
Julius Neudorfer, North American Access Technologies, Inc.
Michael J. Ellsworth, IBM Corporation
Devdatta P. Kulkarni, Intel Corporation
Herb Zien, LiquidCool Solutions, Inc.
Contributors:
Larry Vertal, Asetek, Inc.
The Recognition of Contribution is presented to all members who actively contribute to the completion of a work item published by The Green Grid. Each recipient is awarded a certificate acknowledging their participation.
Liquid cooling has been used since the early mainframe days and to cool some supercomputers. More recently, air cooling became the predominant form of cooling for most computing systems. Over the past several years, however, many new liquid cooling technical developments and products have entered the market. This has been driven by several factors, such as the increased demand for greater power density, coupled with higher information technology (IT) performance for high-performance computing (HPC) and some hyper-scale computing, and the overall industry focus on energy efficiency. The Green Grid developed this white paper to provide a high-level overview of IT and facility considerations related to cooling, along with a guide to state-of-the-art liquid cooling technology. It is intended for chief technology officers and IT system architects, as well as data center designers, owners, and operators. The paper defines and clarifies liquid cooling terms, system boundaries, topologies, and heat transfer technologies. Through it, The Green Grid aims to give industry vendors and end-users a cohesive picture of current products and related developments. The paper refers to existing terminology and methodologies from the American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE) Technical Committee 9.9 Liquid Cooling Guidelines (2016). It also includes recently developed liquid cooling technologies that may not be covered by current ASHRAE publications. In this white paper, The Green Grid also examines and defines direct and indirect benefits for ITE systems, as well as factors for connection to existing facility infrastructure or the need for addition of supplemental heat rejection systems. 
The white paper’s findings will serve as the foundation for a follow-up white paper from The Green Grid, IT Design Enabling Multi-Refresh Liquid Cooling, which will help establish recommendations for standardizing the interfaces of evolving liquid-cooled IT equipment to avoid or minimize facility-side changes.
LiquidCool launches 8kW edge appliance
23 June 2017
By Peter Judge
Micro data centers go well with fluid
LiquidCool Solutions has launched a sealed appliance that provides 8kW of computing power in a compact liquid cooled casing, for data-intensive edge computing applications. LCS Edge uses LiquidCool’s proprietary liquid cooling technology, using off-the-shelf components and needing no air conditioning or raised floors. Units holding four, eight or 16 servers are available.
LCS Edge HPC Anywhere 8kW rack Source: LiquidCool
Where do you want it? “Sending data to the cloud for processing and bouncing information back to the local device is slow and unproductive,” said Herb Zien, CEO, LiquidCool Solutions. “The LCS Edge, which does not require air conditioning and literally can be located anywhere, provides an efficient, cost effective interface between the Internet of Things and the Cloud.” The module gives all the operational and environmental benefits of liquid cooling, but is cheaper than comparable air-cooled servers and, as an appliance, it can be set up simply, the release says: “add power and fiber, attach the radiator, and turn it on.” The sealed unit isolates the electronics so the unit can operate in harsh environments, where there might be sand, dust, factory waste or pollution in the air. There are nine standard configurations of LCS Edge, and the appliance can be further customized for different processor, memory, storage and I/O requirements. Liquid cooling uses a fluid to cool systems, instead of a flow of air. This enables IT equipment to be operated at a higher density, which has led to its use in high performance computing. Immersion cooling in particular submerges the whole IT rack, or parts of it, in coolant.
LCS edge rack stack 002: LiquidCool
LiquidCool has been in operation for some years, and offers immersion cooling for high density IT components. The system places each server blade in a sealed module, and circulates coolant fluid through the module. It has similarities to the UK’s Iceotope, but circulates the coolant outside the blade, instead of using a secondary water circuit.
Liquid Cooling is in Your Future. Are You Ready?
June 30, 2023
Why Liquid Cooling Is Important
Today we launch a series exploring liquid cooling in data centers – and why now’s the time to be getting ready for the future. You’ve built a stable, effective data center for your operation. It meets your current needs and the future growth that was projected when the facility was designed. Perhaps most importantly, its operational costs are well established and easy to budget. So why do you need to consider major changes to the way the data center operates? Why is liquid cooling on the table?
First, air cooling on its own is no longer sufficient to simply meet your current operational needs. As your IT workload equipment moves to the end of its lifecycle, your replacement hardware is going to be more space efficient. The equipment deployed at the beginning of your last refresh cycle is going to be replaced with hardware that is faster, more effective, and more efficient at doing the necessary work.
This isn’t new. You can walk into a data center built a decade ago and find that a data hall that housed a dozen racks may now only have a single rack that is handling a much higher workload than the original deployment. And the first thought to cross your mind is likely “Why do I need to add more space? Why aren’t we using the free space here?” Quite a few data centers were constructed with the ability to increase the power available within the facility, but far fewer were built to handle the increased heat load brought on by higher rack densities. There’s a good chance that adding additional capacity isn’t possible because you will exceed the capacity of the air-to-air cooling systems built with the data center. And wholesale rip and replace is rarely a practical solution.
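The mismatch described here, spare electrical capacity with no spare cooling capacity, is easy to check with a simple heat balance, since nearly every watt of IT power ends up as heat the cooling plant must absorb. All capacities below are hypothetical:

```python
# Feasibility check for adding high-density racks to an existing data hall.
# The facility has the electrical headroom, but does the air-cooling plant
# have the thermal headroom? All figures are hypothetical.

cooling_capacity_kw = 300.0    # installed air-cooling heat-rejection limit
current_it_load_kw = 180.0
new_racks = 10
kw_per_new_rack = 20.0         # modern high-density rack (assumed)

proposed_load_kw = current_it_load_kw + new_racks * kw_per_new_rack
headroom_kw = cooling_capacity_kw - proposed_load_kw

if headroom_kw < 0:
    print(f"Over air-cooling capacity by {-headroom_kw:.0f} kW; "
          "added racks would need supplemental (e.g. liquid) cooling")
else:
    print(f"{headroom_kw:.0f} kW of cooling headroom remains")
```

In this hypothetical the proposed load overshoots the cooling plant by 80 kW even though the switchgear could deliver the power, which is exactly the situation that makes "just add more racks" impossible.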
Second, environmental issues and a focus on sustainability mean that you need to make your facilities more environmentally friendly. Cooling systems that are more efficient reduce energy demands, the number one concern when evaluating data center changes. Additionally, reducing environmental footprint, be it noise, power, space requirements, or any other factor, must be on the list of data center improvements.
Third, technology will continue to change. Be it HPC, AI, or any new technology that places an increased demand on the IT workload supported by the data center, the demand for a more efficient data center will continue. While some changes can be planned well in advance, others, as highlighted by the rapid demand for supporting AI workloads, will come at a fast and furious pace. And you will need to be able to respond to those demands quickly to maintain business agility.
Why Do I Need To Add Liquid Cooling?
While you may not be looking at making changes to your existing hardware deployments, the availability of liquid cooling options within your data centers will increase their flexibility and capabilities.
Upgrades or additions to your existing data centers should utilize hybrid liquid/air cooling technology. This means that your existing air-cooled operations won’t be impacted, and you’ll have the option in new hardware deployments to determine the most cost effective and solution-focused choices as you place the technology necessary to solve your business problems.
If there is good justification for construction, building new data centers or additions to existing facilities that are completely liquid-cooled can be a smart choice. The wide selection of liquid cooling options available means that you will most likely end up with a hybrid cooling model, but, should your IT workload demand it, entirely liquid-cooled data halls that serve specific needs are a practical solution to a number of potential issues. In part, this is because liquid cooling isn’t a single solution. There are a number of different liquid cooling solutions that can be deployed in concert with each other or to solve specific point problems.
Is Liquid Cooling Ready to Go Mainstream?
February 13, 2017
By Steve Campbell
Editor’s Note: It’s no secret that heat is a killer of electronics – performance, density, reliability, and energy efficiency all suffer. In this commentary Steve Campbell, co-founder and managing partner at OrionX, a strategy and research firm, contends liquid cooling may be closer than we think to widespread use in HPC and traditional datacenters and no longer just the purview of supercomputers. Many of the benefits are familiar. Campbell argues that improved cooling technology, expanding offerings from vendors, and potentially major energy savings are the drivers of the liquid cooling adoption trend. SC16, he says, may have represented an early tipping point. See if you agree.
Lost in the frenzy of SC16 was a substantial rise in the number of vendors showing server-oriented liquid cooling technologies. Three decades ago liquid cooling was pretty much the exclusive realm of the Cray-2 and IBM mainframe class products. That’s changing. We are now seeing an emergence of x86 class server products with exotic plumbing technology ranging from Direct-to-Chip to servers and storage completely immersed in a dielectric fluid. Most people know that liquid cooling is far more efficient than air-cooling in terms of heat transfer. It is also more economical, reducing the cost of power by as much as 40 percent depending on the installation. Being more efficient with electricity can also reduce carbon footprint and contribute positively to the goals of “greenness” in data centers, but there are other compelling benefits as well; more on that later. Most HPC users are familiar with the Top500 but may not be as familiar with the Green500, which ranks the Top500 supercomputers in the world by energy efficiency. The focus on performance-at-any-cost computing has led to the emergence of supercomputers that consume vast amounts of electrical power and produce so much heat that large cooling facilities must be constructed to ensure proper performance. To address this trend, the Green500 list puts a premium on energy-efficient performance for sustainable supercomputing. The most recent Green500, released during SC16, has several systems in the top 10 using liquid cooling. US data centers consumed about 70 billion kilowatt-hours of electricity in 2014, about two percent of the country’s total energy consumption, according to a 2016 study conducted by the US Department of Energy in collaboration with researchers from Stanford University, Northwestern University, and Carnegie Mellon University. Liquid cooling can reduce electrical usage by as much as 40 percent, which would take a huge bite out of datacenter energy consumption.
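The two figures quoted above, roughly 70 billion kWh per year of US data center electricity and savings of up to 40 percent, combine into a simple upper bound. The household-consumption figure used for scale is an assumption, not from the article:

```python
# Back-of-envelope upper bound on national savings, using the figures quoted
# in the text. Treats the 40% reduction as applying to total facility energy,
# which is the most optimistic reading of the claim.

us_dc_kwh_per_yr = 70e9        # US data center consumption (per the article)
max_reduction = 0.40           # best-case liquid cooling savings (per the article)

saved_kwh = us_dc_kwh_per_yr * max_reduction

avg_home_kwh_per_yr = 10_500   # rough annual US household use (assumed)
homes_equivalent = saved_kwh / avg_home_kwh_per_yr

print(f"Potential savings: {saved_kwh/1e9:.0f} billion kWh/yr")
print(f"Roughly the annual use of {homes_equivalent/1e6:.1f} million homes")
```

Even if real-world deployments captured only a fraction of the 40 percent, the absolute numbers remain large, which is the "huge bite" the paragraph refers to.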
Liquid cooling can also increase server density. The heat generated by HPC servers rises, and in a full rack of servers those at the top will experience temperature increases and ultimately shut down. Consequently, you can’t completely populate the rack with servers all the way to the top; instead, you need additional racks and extra floor space to get the processing power wanted. Liquid cooling eliminates the need for additional racks, creating higher data center server density using less floor space. This need for processing power and capacity has only been increasing in a race to the top. Another liquid cooling benefit is higher-speed processing – CPUs and other components can run at higher speed as they are cooled more efficiently. Also, liquid-cooled servers require no fans, making them operationally silent. The more servers in a datacenter, the more fans are required, and noise levels increase – until it hits a painful point, sometimes literally. Liquid cooling eliminates fans and thus reduces acoustic noise levels. Reliability can also be improved, as mechanical and thermal fatigue are reduced in liquid cooling systems: there are no moving parts, no vibrations from fans for example, and the systems are cooled more efficiently. The elimination of hot spots and thermal stresses will also lead to improved overall reliability, performance, and life.
Liquid Cooling Round-up
Following is round up of vendors demonstrating liquid cooled servers at SC16:
Aquila developed the Aquarius water-cooled server system, offered in an Open Compute Platform (OCP) rack, in partnership with Clustered Systems, utilizing their cold plate cooling technology. Aquila has also partnered with Houston-based TAS Energy to co-develop an edge data center around the Aquarius platform.
Asetek, a provider of hot-water, direct-to-chip liquid cooling technology, showcased solutions in use worldwide at HPC sites through OEM partners such as Cray, Fujitsu, Format, and Penguin. Liquid cooling solutions for HPE, NVIDIA, Intel, and others were also on display. Asetek’s direct-to-chip cooling technology is deployed in nine installations in the November 2016 Green500 list. The highest ranked, #5 on the list, is the University of Regensburg’s QPACE3, a joint research project with the University of Wuppertal and Jülich Supercomputing Center. Featuring Asetek liquid-cooled Fujitsu PRIMERGY servers, it is one of the first Intel Xeon Phi KNL based HPC clusters in Europe. Ranked #6 on the Green500, Oakforest-PACS is the highest performance supercomputer system in Japan and ranked #6 on the Top500. Fujitsu also deployed HPC clusters with PRIMERGY server nodes at the Joint Center for Advanced High-Performance Computing (JCAHPC) in conjunction with the University of Tokyo and Tsukuba University. Asetek also announced that its liquid cooling technology is cooling eight installations in the November 2016 edition of the TOP500 list of the fastest supercomputers in the world.
CoolIT Systems is a leader in energy-efficient Direct Contact Liquid Cooling (DCLC) solutions for the HPC, Cloud, and Enterprise markets. CoolIT’s solutions target racks of high-density servers. The technology can be deployed with any server in any rack, according to CoolIT. CoolIT has several OEMs including:
- Hewlett Packard Enterprise Apollo 2000 System
- NEC Blue Marlin
- Dell PowerEdge C6320
- Lenovo NeXtScale product offering
CoolIT Systems has also partnered with STULZ and showcased their Chip-to-Atmosphere concept within a micro datacenter. CoolIT was recently selected by the University of Toronto to provide custom liquid cooling for its new signal processing backend, which will support Canada’s largest radio telescope, the Canadian Hydrogen Intensity Mapping Experiment (CHIME), a joint project between the National Research Council of Canada (NRC) and three major universities (McGill, Toronto, UBC).
Ebullient has developed a two-phase cooling system for data center servers. Low-pressure fluid, 3M Novec 7000, is pumped through flexible tubing to sealed modules mounted on the processors in each server. The fluid captures heat from the processors and transports it back to a central unit, where it is either rejected outside the facility or reused elsewhere in the facility or in neighboring facilities. Ebullient’s direct-to-chip systems can cool any server, regardless of make or model. Ebullient is an early stage company founded in 2013 based on technology developed at the University of Wisconsin. The company raised $2.3 million in January 2016.
Green Revolution Cooling’s CarnotJet System is a liquid immersion cooling solution for data center servers. Rack-mounted servers from any OEM vendor can be installed in special racks filled with a dielectric mineral oil. On show at their SC16 booth was the Minimus server, their own design intended to further reduce the cost of the server component of the overall system. In December Green Revolution announced a strategic partnership with Heat Transfer Solutions (HTS), an independent HVAC manufacturers’ representative in North America. As part of the partnership, HTS is making a financial investment in GRC, which will provide growth capital as the company continues to expand its presence in the data center market. In addition, a new CEO was appointed to help grow the company.
LiquidCool Solutions is a technology development firm specializing in cooling electronics by total immersion in its own proprietary dielectric fluid. LiquidCool Solutions was originally founded in 2006 as Hardcore Computing with a focus on workstations, rebranding in 2012 as LiquidCool Solutions with a focus on servers. The company demonstrated two new liquid submerged servers based on its Clamshell design: the Submerged Cloud Server, a 2U 4-node server designed for cloud-computing applications, and the Submerged GPU Server, a 2U dual-node server designed for HPC applications that can be equipped with four GPU cards or four Xeon Phi boards.
LiquidMips showcased a server-cooling concept, a single processor chip immersed in 3M Fluorinert. It’s a long way from being a commercially viable product but represents another company entering the immersive cooling market.
Inspur Systems Inc., part of Inspur Group, showed two types of cooling solutions at SC16, a phase changing cooling solution with ultra-high thermal capacity, and a direct contact liquid cooling solution which allows users to maximize performance and lower operating expenses.
Allied Control specializes in 2-phase immersion cooling solutions for HPC applications. Having built the world’s largest 40MW immersion cooled data center with 252kW per single rack, resulting in 34.7kW/sqm or 3.2kW/sqft including white space, Allied Control offers performance-centric solutions for ultra-high density HPC applications. Allied Control utilizes the 3M Novec dielectric fluid. The BitFury Group (Bitcoin mining giant) acquired Allied Control in 2015. In January 2017 BitFury Group announced a deal with Credit China Fintech Holdings to set up a joint venture that will focus on promoting the technology in China. As part of the deal, Credit China Fintech will invest $30 million in BitFury and the setting up of the joint venture, which will sell BitFury’s bitcoin mining equipment.
ExaScaler Inc. specializes in submersion liquid cooling technology. ExaScaler and its sister company PEZY Computing unveiled ZettaScaler-1.8, the first supercomputer with a performance density of 1.5 PetaFLOPS/m. The ZettaScaler-1.8 is an advanced prototype of the ZettaScaler-2.0, due to be released in 2017 with a performance density three times higher than the ZettaScaler-1.8. The ZettaScaler-1.8 supercomputer is cooled by ExaScaler’s immersion liquid cooling using 3M Fluorinert.
Fujitsu demonstrated a new form of data center, which included cloud-based servers, storage, network switch and center facilities, by combining the liquid immersion cooling technology for supercomputers developed by ExaScaler Inc. with Fujitsu’s know-how on general-purpose computers. Fujitsu is able to capitalize on three decades of liquid cooling expertise, from mainframes to supercomputers to Intel x86. This new style of data center uses liquid immersion cooling technology that completely immerses IT systems such as servers, storage, and networking equipment in liquid coolant in order to cool the devices. The liquid immersion cooling technology uses 3M’s Fluorinert, an inert fluid that provides high heat-transfer efficiency and insulation, as a coolant. IT devices, including servers and storage, are totally submerged in a dedicated reservoir tank filled with liquid Fluorinert, and the heat generated from the devices is processed by circulating the cooled liquid through the devices. This improves the efficiency of the entire cooling system, thereby significantly reducing power consumption. A further benefit of immersed cooling is that it provides protection from harsh environmental elements, such as corrosion, contamination, and pollution.
3M offers HPC cooling solutions using its Engineered Fluids, such as Novec or Fluorinert. Perhaps the winner at SC16 for immersed cooling is 3M, as most of the vendors mentioned here use 3M Engineered Fluids. 3M fluids also featured in some of the networking products at the event. Fully immersed systems can improve energy efficiency, allow for significantly greater computing density, and help minimize thermal limitations during design.
Huawei announced a next-generation FusionServer X6000 HPC server that uses a liquid cooling solution featuring a skive fin micro-channel heat sink for CPU heat dissipation and processing technology where water flows through memory modules. This modular board design and 50ºC warm water cooling offers high energy-efficiency and reduces total cost of ownership (TCO).
HPE and Dell both introduced liquid cooling server products in 2016. Though they do not have the lineage of Fujitsu they nevertheless recognize the values liquid cooling delivers to the datacenter.
HPE’s entrance is the Apollo family of high-density servers. These rack-based solutions include compute, storage, networking, power and cooling. Target users are high-performance computing workloads and big data analytics. At the top of the server lineup, the Apollo 8000 uses a warm-water cooling system, whereas other members of the Apollo family of servers integrate the CoolIT Systems Closed-Loop DCLC (Direct Contact Liquid Cooling).
Dell, like HPE, does not have the decades of liquid cooling expertise of Fujitsu. Dell took the covers off the Dell Triton water cooling system in mid-2016. Dell’s Extreme Scale Infrastructure team built Triton as a proof of concept for eBay, leveraging Dell’s rack-scale infrastructure. The liquid-cooled cold plates directly contact the CPUs, and liquid-to-air heat exchangers cool the airborne heat generated by the large number of densely packed processor nodes.
Can we add liquid cooling to existing servers?
Good question, and the answer is that, practically speaking, you cannot. Adopting liquid cooling only makes sense on new server deployments. That is not to say it is impossible, but making water cooling work on an existing server, whether direct-to-chip or fully immersed, requires many modifications and is not recommended. An existing server has cooling fans that need to be disabled, CPU cooling towers that must be removed, and so on. You also need to add plumbing to your existing rack, which can be a pain.

There is no question that a prospective user needs to consider the impact on, and requirements of, existing datacenter infrastructure: the physical bricks, mortar, plumbing, and so forth. Users considering water-cooled solutions will need to plumb water to the server rack. In a new datacenter that is one level of effort, but if your datacenter is a large closet in an older building, like 43 percent of North American datacenter/server rooms, it may be far more difficult and expensive. A fully immersed solution, such as Fujitsu’s, requires no plumbing; all you need to do is hook up to a chiller, which may be easier and less expensive than water cooling. As a completely sealed unit, a liquid immersion cooling solution can conceivably be deployed almost anywhere, no datacenter required.

Most vendors in this market are small emerging technology companies. Asetek’s data center revenue was $1.8 million in the third quarter and $3.6 million in the first nine months of 2016, compared with $0.5 million and $1.0 million in the same periods of 2015. Asetek is forecasting significant data center revenue growth in 2016 from $1.9M in 2015. CoolIT reported 2014 revenue of $27 million across all product categories. It is worth noting that Asetek’s and CoolIT’s data center revenues are less than 10% of their total company revenues; the remaining 90% comes from workstation and PC liquid cooling solutions.
Ebullient, Liquid MIPS, LiquidCooling, Green Revolution and Aquila have very few customers and probably under $10M in annual revenue. The obvious question, since most of the vendors are small and very early stage, is whether there is truly a market for liquid-cooled servers. Industry analysts believe there is, and forecast the market to grow from about $110 million in 2015 to almost $960 million in 2020, an additional $850 million of incremental revenue in just five years. With healthy future growth prospects, we have started to see larger players such as Fujitsu enter this market. In addition, the HPC system vendors are all OEMing liquid cooling technology to solve the big system cooling issues in their datacenters. With the huge increase in data being generated, artificial intelligence and other applications need to mine this data; consequently, more and more server power is required, and new, innovative cooling approaches are needed, making liquid cooling practical and feasible. As a side note, more and more government RFPs are asking for liquid cooling solutions. Solutions such as the one from Fujitsu can make the crossover from HPC to the commercial datacenter a reality. Could 2017 be the breakout year for liquid cooling, moving it from innovator to early adopter? The Supercomputing Conference is frequently a window into the future. At SC16, more than a dozen companies demonstrated server liquid cooling solutions, with technologies ranging from direct-to-chip to liquid immersive cooling, in which servers and storage are fully immersed in dielectric fluid. Today the majority of providers are early-stage or startup companies, with a notable exception: Fujitsu, a global IT powerhouse, brought over thirty years of liquid cooling experience and demonstrated an immersive cooling solution with Intel-based servers, storage and network switches fully immersed in Fluorinert.
We will see cooling technology move from the confines of high-end supercomputers to a nice niche in the enterprise datacenter for such workloads as big data analytics, AI and high frequency trading.
Data Center Liquid Immersion Cooling Market by Type, Component and Geography – Forecast and Analysis 2023-2027
The data center liquid immersion cooling market is estimated to grow at a CAGR of 22.77% between 2022 and 2027. The size of the market is forecast to increase by USD 537.54 million. The growth of the market depends on several factors, including the increase in the construction of data centers, the reduction in power consumption by data centers, and the inclination toward data center liquid immersion cooling owing to the rise in water scarcity.
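As a sanity check on the figures above, the implied base-year market size can be backed out of the quoted CAGR and the forecast incremental growth. The arithmetic below is purely illustrative; the derived 2022 and 2027 sizes are not figures stated in the report.

```python
# Illustrative arithmetic only: back out the implied 2022 base-year market
# size from the quoted 22.77% CAGR and the USD 537.54M of forecast growth
# over the five years 2022-2027.
cagr = 0.2277
years = 5
incremental = 537.54  # USD million, forecast increase in market size

# If S0 is the 2022 size, then S0 * ((1 + cagr)**years - 1) = incremental.
growth_factor = (1 + cagr) ** years
base_size = incremental / (growth_factor - 1)
size_2027 = base_size * growth_factor

print(f"Implied 2022 market size: USD {base_size:.1f}M")
print(f"Implied 2027 market size: USD {size_2027:.1f}M")
```

On these numbers the implied base is roughly USD 300M, growing to roughly USD 840M by 2027.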
This report extensively covers market segmentation by type (large data centers, small, and mid-sized data centers), component (solution and services), and geography (North America, Europe, APAC, South America, and Middle East and Africa). It also includes an in-depth analysis of drivers, trends, and challenges. Furthermore, the report includes historic market data from 2017 to 2021.
What will be the Size of the Data Center Liquid Immersion Cooling Market During the Forecast Period?
Key Data Center Liquid Immersion Cooling Market Driver
The increase in the construction of data centers is the key factor driving the global data center liquid immersion cooling market growth. Data centers have become an integral part of every organization. The massive growth in the amount of data being generated has compelled several companies to build data centers of their own. The increasing interest in cloud computing will further drive the need for data centers. With the growing demand for data centers, the need for data center liquid immersion cooling is increasing.
Moreover, in terms of countries, data center investments remain high in the US, the UK, and China when compared with other countries. During the forecast period, significant investment is expected in megaprojects, such as Microsoft’s and AWS’ data centers in France, Facebook’s investments in New Mexico, Apple’s investments in Ireland, and Google’s expected operations in eight new regions across the world. Hence, several investments in the construction of data centers will drive the need for data center liquid immersion cooling solutions.
Key Data Center Liquid Immersion Cooling Market Trend
The growing need to reduce carbon footprint is the primary trend in the global data center liquid immersion cooling market. In a data center environment, the operation of components such as IT servers, generators, and the building shell results in carbon dioxide emissions. Carbon emissions can be estimated from the amount of power consumed by these facilities. In a year, data centers consume 2.5% to 4.5% of the power generated globally, while emitting 1.5% to 2.5% of greenhouse gases. Demand for data centers is growing at a significant pace, and many large organizations are compelled to build new data centers to power their businesses efficiently.
The installation of efficient liquid immersion cooling will aid in reducing power consumption as well as carbon emissions in these facilities. The PUE achieved by the CarnotJet System is significantly lower than that of air-cooled data centers, thereby helping to reduce carbon emissions. The direct reduction in carbon emissions associated with lower cooling energy also helps enterprises achieve better scores on the Carbon Disclosure Project (CDP) and reach carbon reduction goals. Therefore, as the need to reduce carbon footprint increases, the global market in focus is expected to witness high growth during the forecast period.
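To make the PUE-to-carbon argument concrete, here is a hedged back-of-the-envelope sketch. The IT load, PUE values, and grid carbon intensity below are assumed round numbers for illustration, not figures from this report.

```python
# Hedged illustration: how a lower PUE translates into energy and carbon
# savings. All inputs are assumed round numbers, not report figures.
it_load_kw = 1000.0          # assumed 1 MW of IT equipment
hours_per_year = 8760
grid_kg_co2_per_kwh = 0.4    # assumed grid carbon intensity

def annual_energy_kwh(pue):
    """Total facility energy: IT load scaled by PUE."""
    return it_load_kw * pue * hours_per_year

air_cooled = annual_energy_kwh(1.6)   # assumed typical air-cooled facility
immersion = annual_energy_kwh(1.05)   # assumed well-run immersion facility

saved_kwh = air_cooled - immersion
saved_co2_t = saved_kwh * grid_kg_co2_per_kwh / 1000.0
print(f"Energy saved: {saved_kwh:,.0f} kWh/yr")
print(f"CO2 avoided:  {saved_co2_t:,.0f} t/yr")
```

Under these assumptions a 1 MW facility saves close to 4.8 GWh and roughly 1,900 tonnes of CO2 per year, which is the mechanism behind the CDP-score benefit described above.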
Key Data Center Liquid Immersion Cooling Market Challenge
The availability of alternative cooling methods is a major challenge to the global data center liquid immersion cooling market growth. Power consumption by data centers is increasing worldwide. This scenario has made data center operators look for alternative solutions that are efficient in terms of power consumption and performance. Vendors in the data center cooling market are moving from air-based cooling to chilled-water-based cooling solutions, and both are predominantly used in data centers worldwide. The liquid immersion cooling solution, by contrast, is widely adopted by scientific data centers, where the computing requirement is usually more than twice that of a typical large data center environment.
Currently, air-based cooling solutions using airside and waterside economizers are effective and can operate a data center at a PUE of 1.1, with reduced energy consumption when paired with a renewable source. However, implementing air-based cooling involves higher CAPEX than an immersion solution, and it is not suitable for every environmental condition. Due to such factors, global market growth may be restrained during the forecast period.
North America is projected to contribute 38% by 2027. Technavio’s analysts have elaborately explained the regional trends and drivers that shape the market during the forecast period. The data center liquid immersion cooling market in North America is expected to grow during the forecast period as vendors and distributors are increasingly entering into strategic partnerships to offer efficient data center liquid immersion cooling solutions. Data center liquid immersion cooling vendors are continuously focusing on strategic alliances to expand their product portfolio and gain a competitive edge over each other, as well as to improve their market reach and customer base. Intense competition in the market also compels established vendors to grow their market presence through strategic alliances. With the increasing number of strategic alliances, awareness about the availability of data center liquid immersion cooling solutions and their adoption will increase during the forecast period.
This report forecasts the contribution of all the segments to the growth of the market. In addition, we have included the COVID-19 impact and the recovery strategies for each segment. COVID-19 led to an upsurge in demand in North America. Lockdown restrictions were relaxed in the second half of 2020 owing to large-scale vaccination drives across North America, and since the first half of 2021 the removal of restrictions has allowed operations to resume in industries such as BFSI, IT, and manufacturing. This is expected to revive demand for data center infrastructure during the forecast period. Moreover, the increase in the adoption of cloud services and digitization has led to the emergence of a new business environment and new data.
by: Herb Zien – CEO of LiquidCool Solutions
Most data centers in operation today defy logic. They are cooled by circulating conditioned air around the data processing room and through the racks. Separate hot and cold aisles are maintained in an attempt to conserve energy. In most installations, cold air is forced up through holes in the floor. And humidity control is necessary to avoid condensation on IT equipment if too high or electrostatic discharge if too low.

Air-cooled data centers are expensive to build and operate. Up to 15% of the total power supplied to a data center can be used to circulate air, and another 15% is used by rack and blade fans. Not only are fans inefficient, they fail. Fan cooling also limits power density, which is critical to reducing the white-space footprint as well as maintenance and infrastructure costs.

Cooling with air creates problems beyond wasting energy and space. Contact between air and electronics leads to oxidation and tin whiskers. Pollutants in the air cause additional damage. Filters clog, resulting in overheating. Fans transmit vibrations that loosen solder joints, and they generate heat that must be dissipated. Many data centers operate at excessive noise levels from the fans, and OSHA regulations require earplugs.

It gets even worse. Raising the temperature in a data center to reduce the need for mechanical refrigeration causes fans in the central air-handling system, CRAC units and device chassis to spin faster to move more air. Fan energy increases as the cube of the volume of air circulated, which means doubling the airflow requires eight times more energy.

All of these problems can be avoided through liquid-cooled data centers. It’s simple physics. Liquids cool electronics 1,000 times more effectively than air. Air is an insulator with negligible heat capacity or thermal mass. Warm air rises and cold air sinks, so if a data center has a raised floor and cold air is blown uphill, energy is unnecessarily being wasted to fight gravity.
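The cube-law relationship between airflow and fan power cited above (a consequence of the standard fan affinity laws) can be sketched in a few lines:

```python
# Fan affinity law: fan power scales with the cube of airflow,
# so doubling airflow requires about eight times the power.
def relative_fan_power(flow_ratio):
    """Power ratio corresponding to a given airflow ratio."""
    return flow_ratio ** 3

for ratio in (1.0, 1.5, 2.0):
    print(f"{ratio:.1f}x airflow -> {relative_fan_power(ratio):.3f}x fan power")
```

A 2.0x airflow ratio yields an 8.0x power ratio, matching the "doubling the airflow requires eight times more energy" figure in the text; even a modest 1.5x increase in airflow already costs nearly 3.4x the fan power.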
Ironically, some of the earliest computer installations were liquid cooled, but the technology available then was expensive, messy, difficult to maintain and inconvenient, and water leaks had the potential to be catastrophic. Air conditioning for employee comfort was already installed in the building, so the simplest thing to do was expand the AC system to pick up the additional cooling load of the server rooms. Rather than isolating and solving the data center cooling problem, a bandage was applied—an easy fix.

A lot has changed in the past few years. Energy waste and carbon footprints have become high-visibility issues. Rack power densities have increased, in some cases to the point where air cooling is bumping against thermodynamic limits. The bandage is becoming unstuck. Importantly, some liquid cooling technologies available now overcome the perceptions that carried over from the old days. Liquid-cooled IT devices can be neat, easy to maintain, scalable and inexpensive. In some cases it is possible to commercially recycle much of the input energy to heat buildings or domestic hot water, cutting the carbon footprint even further.

Three technologies have emerged to cool electronic equipment with liquids: cold plates, in-row cooling and immersion in a dielectric fluid. Cold plates, originally designed to enable gamers to overclock their machines, target the hottest or highest-power-density components in servers, namely the processors. Device fans, facility fans and other infrastructure are still required to cool other components that are not covered by cold plates. Additionally, cold plates are an ineffective way to cool switches, which lack point sources of heat. Cooling efficiency for cold-plate systems can be 50% better than air. In-row cooling is essentially an attempt to make the room around the IT equipment smaller.
This technology can reduce cooling energy by 60% compared with air, but it still requires all the elements of a complete data center air-conditioning system.

Immersive cooling means that electronics are totally immersed in a nonconducting dielectric fluid, thereby decoupling electronics from the room and eliminating fans. A closed cycle dissipates heat. Some direct-contact systems are single phase, where the dielectric fluid remains a liquid throughout the heat-dissipation cycle. Others use a two-phase system in which the fluid boils and then condenses. Cooling efficiency for an immersive system can be more than 90% better than air.

If an organization is considering liquid cooling to address capital cost, operating cost, space, reliability, noise or carbon-footprint problems, immersive-cooling systems are a logical choice. A number of technologies are commercially available, and the devil is in the details, but immersing electronics in a dielectric fluid instead of an air bath offers significant benefits:
– The highest possible thermal efficiency
– No rack or chassis fans to fail
– No oxidation or corrosion of electrical contacts
– Reduction in the thermal fluctuations that drive solder-joint failures
– Much lower operating temperatures for the board and components
– No exposure to electrostatic-discharge events
– No fretting corrosion of electrical contacts induced by structural vibration caused by chassis fans
– No sensitivity to ambient particulate, humidity or temperature conditions
– Waste energy can be recaptured in a form convenient for recycling
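As a rough illustration of how the single-phase heat-dissipation cycle described above is sized, the sketch below applies the standard heat-balance relation Q = m_dot * c_p * dT. The heat load and fluid properties are assumed round numbers for a generic dielectric fluid, not vendor specifications.

```python
# Hedged sizing sketch for a single-phase immersion loop: the coolant flow
# needed to carry away a given heat load, from Q = m_dot * c_p * dT.
# All inputs are assumed round numbers, not vendor data.
heat_load_kw = 50.0   # assumed heat load of one immersion tank
c_p = 1.1             # kJ/(kg*K), assumed specific heat of the fluid
delta_t = 10.0        # K, assumed fluid temperature rise across the tank
density = 1800.0      # kg/m^3, assumed fluid density

mass_flow = heat_load_kw / (c_p * delta_t)          # kg/s
volume_flow_lpm = mass_flow / density * 1000 * 60   # litres per minute
print(f"Mass flow:   {mass_flow:.2f} kg/s")
print(f"Volume flow: {volume_flow_lpm:.1f} L/min")
```

Under these assumptions a 50 kW tank needs only on the order of 150 L/min of circulating fluid, which is why the closed-cycle pumps in such systems are small compared with the fan plant they replace.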
In addition to the obvious space and power benefits, immersive cooling eliminates the need to purchase, install and maintain chillers, room air handlers, humidity-control systems, water-treatment equipment and air-filtration equipment. It is curious that, considering its obvious advantages, immersive cooling is only now beginning to get market traction. The status quo has a lot of inertia, but it’s not just about power density. Steve Jobs summed it up best: “It takes a lot of hard work to make something simple, to truly understand the underlying challenges and come up with elegant solutions.” Liquid cooling, cleverly executed, can be an elegant solution to reducing data center energy waste, water usage, carbon footprint and cost. The brass-era generation did not trade up from a horse and carriage to a horseless carriage to go 30 miles per hour; they did it to get rid of the horse! The horse used far too much energy, took far more space and polluted the environment. Fans are the horses of the digital age, and immersive cooling is the only certain way to completely eliminate fans.