From the Far Edge to the Edge Presentation
April 20th, 2019
– No air conditioning is required
– No water is consumed
– Electronics completely isolated from ambient air
– The power needed to cool data centers is reduced by up to 98%
– Waste heat from servers can be recovered and reused
– Data centers run silently
– Dielectric heat transfer fluid is never replaced and has a GWP of zero
– Causes of server failures are reduced and enclosures can be reused
– E-waste is cut significantly
– Scalable in form factor from the Far Edge, to the Aggregated Edge, to the Regional Data Center to the Central Data Center
Cooling Data Centers – The End Game by Herb Zien
An Analogy – Lighting
Incandescent lighting was the standard for 100 years. This technology is so inefficient that scientists spent decades searching for an alternative, and since the 1980s most engineers assumed that Light Emitting Diodes would be the end game. However, LED technology was not ready for commercialization because bulbs could not meet market requirements in terms of color and cost. An entire industry, Compact Fluorescent Lighting, sprang up to transition from legacy to LED. Patents were filed, factories built, and distribution channels created, but all of this was temporary. When was the last time you bought a CFL bulb?
Legacy Technology for Cooling Data Centers
Immersing electronics in air is the legacy technology for data center cooling. Like incandescent bulbs, this approach is extraordinarily inefficient because air is a poor conductor of heat. Massive fan power is required to overcome air's natural reluctance to move heat from where it is generated to where it must be rejected. In addition, air conditioning, either direct expansion or evaporative cooling, usually is required to promote heat transfer. As a result, half the energy consumed by most data centers is wasted cooling the electronics.
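The "half the energy" claim maps onto the industry's Power Usage Effectiveness (PUE) metric, defined as total facility power divided by IT equipment power. A minimal back-of-envelope sketch in Python; the load figures are illustrative assumptions, not numbers from the article:

```python
# Back-of-envelope PUE (Power Usage Effectiveness) calculation.
# PUE = total facility power / IT equipment power.
# If cooling overhead roughly matches the IT load, about half the
# facility's energy goes to cooling and PUE approaches 2.0.
it_load_kw = 500.0   # power drawn by the servers themselves (assumed)
cooling_kw = 450.0   # fans, chillers, CRAC units (assumed)
other_kw = 50.0      # lighting, power distribution losses (assumed)

total_kw = it_load_kw + cooling_kw + other_kw
pue = total_kw / it_load_kw
cooling_share = cooling_kw / total_kw

print(f"PUE = {pue:.2f}")                    # 2.00
print(f"Cooling share = {cooling_share:.0%}")  # 45%
```

An ideal facility with no cooling or distribution overhead would have a PUE of 1.0, which is the direction total-immersion vendors claim to push.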
Transitional Liquid Cooling Technologies
Two transitional liquid cooling technologies, Cold Plates and In-Row Cooling, sprang up to partially address the deficiencies of air cooling. Cold Plates, originally developed to cool high-power processors in computer gaming products, are being modified for data center applications. Standard heat spreaders mounted on processors are replaced with heat exchangers that transfer heat from the processor to a fluid, usually water, which is pumped to a second exchanger that rejects the heat to air. Cold Plates are often referred to as “direct contact” systems, but there is no direct contact; only the hottest electronic components are covered by Cold Plates, and there is a heat transfer barrier between the processor and the transfer fluid. Furthermore, only about half the heat generated in a server is removed by the fluid; the rest is blown into the room by fans.
In-Row Cooling technology effectively makes the room smaller. In legacy data centers a huge volume of air is cooled to remove heat generated in racks that occupy only about half of the floor space. In-Row Coolers, essentially mini-HVAC systems comprising fans and water-to-air heat exchangers, are installed in the racks to reduce the volume of air that must be cooled. This technology removes heat but is expensive and introduces water to the data center.
Total Immersion, where all heat generating components are immersed in a heat-conductive but electrically non-conductive medium, is a more energy-efficient solution than Cold Plates or In-Row Cooling systems, but most approaches are not ready for prime time. There are two forms of Total Immersion cooling, Two-Phase and Single-Phase. Two-Phase Immersion systems are Tank-Based and electronic components are submerged in a refrigerant bath. Boiling occurs on the surface of heat generating components and vapor passively rises to the top of the enclosure, where it condenses on water-cooled coils and falls back into the tank. This technology has major drawbacks which include:
- The dielectric refrigerant is very expensive
- The refrigerant is highly volatile, and evaporates when the lid is opened for maintenance
- Cavitation associated with boiling can erode electronic boards
- Some refrigerants are toxic
- Water is required to condense the refrigerant vapor
Single-Phase Immersion systems can be either Tank-Based or Rack-Based.
Tank-Based systems resemble a rack tipped over on its back, with modified servers inserted vertically into slots in the tank. Drawbacks include:
- Poor scalability, because the tank takes a lot of floor space
- Weight, because of the large volume of fluid
- Messiness, because access to electronic components requires aprons and gloves
- Some systems use a mineral oil dielectric, which often has impurities
- All cooling is by bulk flow, which reduces efficiency
- Water is required to cool the dielectric
Rack-Based Single-Phase Total Immersion – The Final Answer
Rack-Based Single-Phase Total Immersion is the ultimate solution for cooling data centers. This technology, perfected by LiquidCool Solutions (LCS), overcomes the negative perceptions the market may have had about immersion cooling. Scalable across data center applications, LCS-cooled hardware is Rack-Based and:
- Cost effective
- Energy efficient
- Neat and easy to maintain
- Water-free
For lighting CFLs were a transition technology from incandescent bulbs to LEDs. For data centers Cold Plates and In-Row Cooling systems are transition technologies. They are better than air but:
- Not as efficient as total immersion
- Still reliant on fans
- Encumbered by heat removal hardware that can get in the way of maintenance
- Potentially expensive
Rack-Based Single-Phase Immersion Technology is the End Game for Cooling Electronics.
Innovation Incubator: LiquidCool Solutions Technical Evaluation
December 20th, 2017
Laboratory Study and Demonstration Results of a Directed-Flow, Liquid Submerged Server for High-Efficiency Data Centers Eric Kozubal National Renewable Energy Laboratory https://www.nrel.gov/docs/fy18osti/70459.pdf
Seagate vet takes helm of VC-backed tech company
By Katharine Grayson – Senior Reporter, Minneapolis / St. Paul Business Journal Nov 8, 2017, 2:18pm
Former Seagate Technology executive Darwin Kauffman has been named CEO of venture-capital-backed tech company LiquidCool Solutions Inc. Kauffman most recently spent five years as vice president of management and senior director of product strategy for data-storage giant Seagate. He succeeds Herb Zien, an early LiquidCool Solutions investor who took over as CEO in 2013. Rochester, Minn.-based LiquidCool keeps servers and other equipment from overheating by submerging components in a soybean-based liquid that keeps them chilled. The company launched a decade ago as Hardcore Computer with plans to build high-end computers for gamers. It later got out of that business, citing stiff competition from hardware manufacturers, and began targeting the data center market. The company was one of four businesses picked to participate in Wells Fargo & Co.’s Innovation Incubator (IN2) program for clean-tech startups two years ago. LiquidCool’s investors include Arthur Ventures, which has operations in Fargo, N.D., and Minneapolis, plus Capital Midwest Fund and Minneapolis-based StarTec Investments.
Rochester firm has cool solution
Jeff Kiger – July 28th, 2017
Leaders of a Rochester technology firm believe they have “a very big solution to a very big problem.” CEO Herb Zien says LiquidCool Solutions “can pretty much cool electronics of any shape or size” by submerging circuit boards in liquid and eliminating the use of power-hungry fans. The U.S. Department of Energy’s National Renewable Energy Laboratory, in association with Wells Fargo, recently found that LiquidCool’s servers reduce data center power usage by 40 percent compared to using traditional fans and air conditioning. That’s significant, because studies have estimated that data centers account for 2 percent of all energy use in the U.S. “Fans are terrible. They don’t make sense to cool computers, and they never did,” he said. Improving performance and reducing energy use by immersing circuitry in dielectric fluid has always been at the core of the Rochester company, though the focus has evolved in the past 10 years. LiquidCool Solutions, based at 2717 U.S. 14 West, was founded in 2007 by Chad Attlesey, Daren Klum and Scott Littman under the original name of Hardcore Computing. It was led by CEO Al Berning. The company began with seven patents for its revolutionary technology. It started out by making custom desktop computers for video gamers with aggressive names like Reactor and Detonator, but that never really caught on with customers. Hardcore shifted to creating desktop workstations and then servers. The latter attracted interest from data centers. In 2012, the company rebooted and changed its leadership as well as its name. Now LiquidCool is finding success in the much less sexy markets of data centers and rugged machines for use in extreme heat conditions. The firm has expanded its intellectual property portfolio to 30 patents with 16 more pending. A big boost is coming from having its cooling results tested and confirmed by a trusted source like NREL.
That happened because LiquidCool was accepted into Wells Fargo’s Innovation Incubator program to create clean, smart building technologies. Zien sees LiquidCool growing in the coming years as more customers adopt its technology. The Rochester development and prototype lab has 13 on staff. Much of the technology is manufactured nearby at Benchmark Electronics’ facility. “There’s a lot of talent in Rochester. It’s our inclination, for a whole lot of reasons, to stay and grow in Rochester,” he said. LiquidCool is currently closing on $6.5 million in financing, which is hoped to be the final round of external funding it needs. While it’s hard to predict the future, Zien said he could see the Rochester site “doubling” in size or more. “What’s going on in that little shopping mall in Rochester is a very, very big deal,” he said. “We always knew the technology being developed in Rochester was really groundbreaking.”
LiquidCool awarded Green Grid “Contribution Award”
The Recognition of Contribution is presented to all members who actively contribute to the completion of a work item published by The Green Grid. Each recipient is awarded a certificate acknowledging their participation.
28 November, 2016 | White Paper Editors:
Julius Neudorfer, North American Access Technologies, Inc.
Michael J. Ellsworth, IBM Corporation
Devdatta P. Kulkarni, Intel Corporation
Herb Zien, LiquidCool Solutions, Inc.
Contributors:
Larry Vertal, Asetek, Inc.
Liquid cooling has been used since the early mainframe days and to cool some supercomputers. More recently, air cooling became the predominant form of cooling for most computing systems. Over the past several years, however, many new liquid cooling technical developments and products have entered the market. This has been driven by several factors, such as the increased demand for greater power density, coupled with higher information technology (IT) performance for high-performance computing (HPC) and some hyper-scale computing, and the overall industry focus on energy efficiency. The Green Grid developed this white paper to provide a high-level overview of IT and facility considerations related to cooling, along with a guide to state-of-the-art liquid cooling technology. It is intended for chief technology officers and IT system architects, as well as data center designers, owners, and operators. The paper defines and clarifies liquid cooling terms, system boundaries, topologies, and heat transfer technologies. Through it, The Green Grid aims to give industry vendors and end-users a cohesive picture of current products and related developments. The paper refers to existing terminology and methodologies from the American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE) Technical Committee 9.9 Liquid Cooling Guidelines (2016). It also includes recently developed liquid cooling technologies that may not be covered by current ASHRAE publications. In this white paper, The Green Grid also examines and defines direct and indirect benefits for IT equipment (ITE) systems, as well as factors for connection to existing facility infrastructure or the need for addition of supplemental heat rejection systems.
The white paper’s findings will serve as the foundation for a follow-up white paper from The Green Grid, IT Design Enabling Multi-Refresh Liquid Cooling, which will help establish recommendations for standardizing the interfaces of evolving liquid-cooled IT equipment to avoid or minimize facility-side changes.
LiquidCool launches 8kW edge appliance
23 June 2017 By Peter Judge
Micro data centers go well with fluid
LiquidCool Solutions has launched a sealed appliance that provides 8kW of computing power in a compact liquid cooled casing, for data-intensive edge computing applications. LCS Edge uses LiquidCool’s proprietary liquid cooling technology, using off-the-shelf components and needing no air conditioning or raised floors. Units holding four, eight or 16 servers are available.
LCS Edge HPC Anywhere 8kW rack Source: LiquidCool
Where do you want it? “Sending data to the cloud for processing and bouncing information back to the local device is slow and unproductive,” said Herb Zien, CEO, LiquidCool Solutions. “The LCS Edge, which does not require air conditioning and literally can be located anywhere, provides an efficient, cost effective interface between the Internet of Things and the Cloud.” The module gives all the operational and environmental benefits of liquid cooling, but is cheaper than comparable air-cooled servers and, as an appliance, it can be set up simply, the release says: “add power and fiber, attach the radiator, and turn it on.” The sealed unit isolates the electronics so the unit can operate in harsh environments, where there might be sand, dust, factory waste or pollution in the air. There are nine standard configurations of LCS Edge, and the appliance can be further customized for different processor, memory, storage and I/O requirements. Liquid cooling uses a fluid to cool systems, instead of a flow of air. This enables IT equipment to be operated at a higher density, which has led to its use in high performance computing. Immersion cooling in particular submerges the whole IT rack, or parts of it, in coolant.
LCS edge rack stack 002: LiquidCool
LiquidCool has been in operation for some years, and offers immersion cooling for high density IT components. The system places each server blade in a sealed module, and circulates coolant fluid through the module. It has similarities to the UK’s Iceotope, but circulates the coolant outside the blade, instead of using a secondary water circuit.
Data Center Liquid Immersion Cooling 2017 Global Market Expected to Grow at CAGR 54.3% And Forecast To 2020
By: Press Release Distribution Service
March 08, 2017 at 00:02 AM EST
Wiseguyreports.Com added New Market Research Report On – “Global Data Center Liquid Immersion Cooling Market 2017 Manufacturers Analysis, Opportunities and Growth Forecast To 2020”.
Pune, India – March 8, 2017 /MarketersMedia/ —
Global Data Center Liquid Immersion Cooling Market
Liquid immersion cooling refers to the dissipation of heat generated in hardware using a thermally conductive dielectric liquid. Liquid immersion cooling is used at data centers, mainframes, and desktop computers. Advantages of this cooling type include up to 99 percent less energy consumption and lower maintenance costs as compared to other cooling types, and noise-free operation. The global data center liquid immersion cooling market is estimated to grow from USD 109.51 Million in 2015 to USD 959.62 Million in 2020 at a CAGR of 54.3%. This market is driven by increasing demand for data centers, instigated by rapid adoption of advanced technologies such as cloud-based services and big data analytics for operational business needs across the globe.
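The quoted growth rate can be sanity-checked from the start and end figures, since CAGR = (end / start) ^ (1 / years) - 1. A quick sketch; the Americas figures used for the second check are taken from the regional analysis later in this release:

```python
# Verify the compound annual growth rates implied by the press release.
# CAGR = (end / start) ** (1 / years) - 1
start_musd, end_musd, years = 109.51, 959.62, 5   # global market, 2015 -> 2020
cagr = (end_musd / start_musd) ** (1 / years) - 1
print(f"Global CAGR: {cagr:.2%}")   # ~54.4%, essentially the reported 54.3%

# Americas figures from the regional analysis section of the same release
am_cagr = (494.86 / 51.13) ** (1 / 5) - 1
print(f"Americas CAGR: {am_cagr:.2%}")  # ~57.46%, matching the reported 57.45%
```

The small discrepancy on the global figure is a rounding artifact in the release; the underlying numbers are internally consistent.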
The key players of Global data center liquid immersion cooling market included in the report are 3M Co. (U.S.), Fujitsu Ltd (Japan), Iceotope Research and Development Limited (UK), CoolIT Systems, Inc. (Canada), LiquidCool Solutions, Inc. (U.S.), Allied Control (U.S.), Asetek (U.S.), Midas Green Technologies LLC (U.S.), Ebullient, Inc. (U.S.), Green Revolution Cooling (U.S.), and Rittal GmbH & Co. Kg (Germany).
• To provide detailed analysis of the market structure along with forecast for the next five years
• To identify upcoming technologies and high-growth geographies and countries
• Regional & country specific demand and forecast for the market
Get Sample Report @ https://www.wiseguyreports.com/sample-request/868029-global-data-center-liquid-immersion-cooling-market-analysis-forecast-2016-to-2020
• Liquid immersion cooling technology/solution providers
• Hardware manufacturers
• Resellers and distribution
• Data center providers
• Americas market for liquid immersion cooling was valued at USD 51.13 million in 2015, and is expected to reach USD 494.86 million by 2020
• Asia-Pacific market is expected to grow at a rate of 45.1% CAGR during the forecast period
Regional and Country Analysis
The Americas region dominated the data center liquid immersion cooling market in 2015, and was valued at USD 51.13 million that year. It is expected to witness rapid growth of 57.45% CAGR during the forecast period to reach USD 494.86 million by 2020. The Asia-Pacific market is also expected to grow at a fast pace during the forecast period. Its expected growth of 45.1% CAGR is attributed to growing investments in new technology developments as well as positive government initiatives for the data center market. The report also covers country-level analysis:
• Europe, Middle East & Africa
o Rest of the Europe
• Asia – Pacific
o South Korea
o Rest of Asia-Pacific
• Rest of the World
Complete Report Details @ https://www.wiseguyreports.com/reports/868029-global-data-center-liquid-immersion-cooling-market-analysis-forecast-2016-to-2020
Is Liquid Cooling Ready to Go Mainstream?
By Steve Campbell
February 13, 2017
Editor’s Note: It’s no secret that heat is a killer of electronics – performance, density, reliability, and energy efficiency all suffer. In this commentary Steve Campbell, co-founder and managing partner at OrionX, a strategy and research firm, contends liquid cooling may be closer than we think to widespread use in HPC and traditional datacenters and no longer just the purview of supercomputers. Many of the benefits are familiar. Campbell argues that improved cooling technology, expanding offerings from vendors, and potentially major energy savings are the drivers of the liquid cooling adoption trend. SC16, he says, may have represented an early tipping point. See if you agree.
Lost in the frenzy of SC16 was a substantial rise in the number of vendors showing server oriented liquid cooling technologies. Three decades ago liquid cooling was pretty much the exclusive realm of the Cray-2 and IBM mainframe class products. That’s changing. We are now seeing an emergence of x86 class server products with exotic plumbing technology ranging from Direct-to-Chip to servers and storage completely immersed in a dielectric fluid. Most people know that liquid cooling is far more efficient than air-cooling in terms of heat transfer. It is also more economical, reducing the cost of power by as much as 40 percent depending on the installation. Being more efficient with electricity can also reduce carbon footprint and contribute positively to the goals of “greenness” in the data centers, but there are other compelling benefits as well, more on that later. Most HPC users are familiar with the Top500 but may not be as familiar with the Green500, which ranks the Top500 supercomputers in the world by energy efficiency. The focus of performance-at-any-cost computer operations has led to the emergence of supercomputers that consume vast amounts of electrical power and produce so much heat that large cooling facilities must be constructed to ensure proper performance. To address this trend, the Green500 list puts a premium on energy-efficient performance for sustainable supercomputing. The most recent Green500, released during SC16, has several systems in the top 10 using liquid cooling.
US data centers consumed about 70 billion kilowatt-hours of electricity in 2014, about two percent of the country’s total energy consumption, according to a 2014 study conducted by the US Department of Energy in collaboration with researchers from Stanford University, Northwestern University, and Carnegie Mellon University. Liquid cooling can reduce electrical usage by as much as 40 percent, which would take a huge bite out of datacenter energy consumption. Liquid cooling can also increase server density. The heat generated by HPC servers rises, and in a full rack the servers at the top will experience temperature increases and ultimately shut down. Consequently, you can’t completely populate the rack with servers all the way to the top; instead you need additional racks and extra floor space to get the processing power you want. Liquid cooling eliminates the need for additional racks, creating higher data center server density using less floor space. This need for processing power and capacity has only been increasing in a race to the top. Another liquid cooling benefit is higher-speed processing: CPUs and other components can run at higher speed as they are cooled more efficiently. Also, servers require no fans, making them operationally silent. The more servers in a datacenter, the more fans are required and the more noise levels increase, until it hits a painful point, sometimes literally. Liquid cooling eliminates fans and thus reduces acoustic noise levels. Reliability can also be improved, as mechanical and thermal fatigue are reduced in liquid cooling systems: there are no moving parts, no vibrations from fans, and the systems are cooled more efficiently. The elimination of hot spots and thermal stresses will also lead to improved overall reliability, performance and life.
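The figures above admit a simple back-of-envelope savings estimate. The sketch below only combines numbers already quoted (70 billion kWh, a 2 percent share, an up-to-40-percent reduction) and is illustrative, not a forecast:

```python
# Back-of-envelope check of the savings figures cited above.
us_datacenter_kwh = 70e9          # US data center consumption, 2014
liquid_cooling_reduction = 0.40   # upper-bound reduction claimed in the article

potential_savings_kwh = us_datacenter_kwh * liquid_cooling_reduction
print(f"Potential savings: {potential_savings_kwh / 1e9:.0f} billion kWh/yr")  # 28

# Sanity check against the "two percent of US consumption" figure:
implied_us_total_kwh = us_datacenter_kwh / 0.02
print(f"Implied US total: {implied_us_total_kwh / 1e12:.1f} trillion kWh")  # 3.5
```

Even as an upper bound, tens of billions of kilowatt-hours per year is why the article treats cooling efficiency as a major driver of adoption.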
Liquid Cooling Round-up
Following is round up of vendors demonstrating liquid cooled servers at SC16:
Aquila developed the Aquarius water-cooled server system, offered in an Open Compute Platform (OCP) rack, in partnership with Clustered Systems, utilizing their cold plate cooling technology. Aquila has also partnered with Houston-based TAS Energy to co-develop an edge data center around the Aquarius platform.
Asetek, a provider of hot water, direct-to-chip liquid cooling technology, showcased solutions in use worldwide by HPC users through OEM partners such as Cray, Fujitsu, Format, and Penguin. Liquid cooling solutions for HPE, NVIDIA, Intel, and others were also on display. Asetek’s direct-to-chip cooling technology is deployed in nine installations on the November 2016 Green500 list. The highest ranked is #5 on the list: the University of Regensburg’s QPACE3, a joint research project with the University of Wuppertal and Jülich Supercomputing Center. Featuring Asetek liquid cooled Fujitsu PRIMERGY servers, it is one of the first Intel Xeon Phi KNL based HPC clusters in Europe. Ranked #6 on the Green500, Oakforest-PACS is the highest performance supercomputer system in Japan and ranked #6 on the Top500. Fujitsu also deployed HPC clusters with PRIMERGY server nodes at the Joint Center for Advanced High-Performance Computing (JCAHPC) in conjunction with the University of Tokyo and Tsukuba University. Asetek also announced that its liquid cooling technology is cooling eight installations in the November 2016 edition of the TOP500 list of the fastest supercomputers in the world.
CoolIT Systems is a leader in energy-efficient Direct Contact Liquid Cooling solutions for the HPC, Cloud and Enterprise markets. CoolIT’s solutions target racks of high-density servers. The technology can be deployed with any server in any rack, according to CoolIT. CoolIT has several OEMs including:
- Hewlett Packard Enterprise Apollo 2000 System
- NEC Blue Marlin
- Dell PowerEdge C6320
- Lenovo NeXtScale product offering
CoolIT Systems has also partnered with STULZ and showcased their Chip-to-Atmosphere concept within a micro datacenter. CoolIT was recently selected by the University of Toronto to provide custom liquid cooling for its new signal processing backend, which will support Canada’s largest radio telescope, the Canadian Hydrogen Intensity Mapping Experiment (CHIME), a joint project between the National Research Council of Canada (NRC) and three major universities (McGill, Toronto, UBC).
Ebullient has developed a two-phase cooling system for data center servers. Low-pressure fluid, 3M Novec 7000, is pumped through flexible tubing to sealed modules mounted on the processors in each server. The fluid captures heat from the processors and transports it back to a central unit, where it is either rejected outside the facility or reused elsewhere in the facility or in neighboring facilities. Ebullient’s direct-to-chip systems can cool any server, regardless of make or model. Ebullient is an early stage company founded in 2013 based on technology developed at the University of Wisconsin. The company raised $2.3 million in January 2016.
Green Revolution Cooling’s CarnotJet System is a liquid immersion cooling solution for data center servers. Rack-mounted servers from any OEM vendor can be installed in special racks filled with a dielectric mineral oil. On show at its SC16 booth was the Minimus server, its own design to further reduce the cost of the server component of the overall system. In December Green Revolution announced a strategic partnership with Heat Transfer Solutions (HTS), an independent HVAC manufacturers’ representative in North America. As part of the partnership, HTS is making a financial investment in GRC, which will provide growth capital as the company continues to expand its presence in the data center market. In addition, a new CEO was appointed to help grow the company.
LiquidCool Solutions is a technology development firm specializing in cooling electronics by total immersion in its own proprietary dielectric fluid. LiquidCool Solutions was originally founded in 2006 as Hardcore Computing with a focus on workstations, rebranding in 2012 to LiquidCool Solutions with a new focus on servers. The company demonstrated two new liquid submerged servers based on its Clamshell design: the Submerged Cloud Server, a 2U 4-node server designed for cloud-computing applications, and the Submerged GPU Server, a 2U dual-node server designed for HPC applications that can be equipped with four GPU cards or four Xeon Phi boards.
LiquidMips showcased a server-cooling concept, a single processor chip immersed in 3M Fluorinert. It’s a long way from being a commercially viable product but represents another company entering the immersive cooling market.
Inspur Systems Inc., part of Inspur Group, showed two types of cooling solutions at SC16, a phase changing cooling solution with ultra-high thermal capacity, and a direct contact liquid cooling solution which allows users to maximize performance and lower operating expenses.
Allied Control specializes in 2-phase immersion cooling solutions for HPC applications. Having built the world’s largest 40MW immersion cooled data center, with 252kW per single rack resulting in 34.7kW/sqm or 3.2kW/sqft including white space, Allied Control offers performance-centric solutions for ultra-high density HPC applications. Allied Control utilizes the 3M Novec dielectric fluid. The BitFury Group (the Bitcoin mining giant) acquired Allied Control in 2015. In January 2017 BitFury Group announced a deal with Credit China Fintech Holdings to set up a joint venture that will focus on promoting the technology in China. As part of the deal, Credit China Fintech will invest $30 million in BitFury and in setting up the joint venture, which will sell BitFury’s bitcoin mining equipment.
ExaScaler Inc. specializes in submersion liquid cooling technology. ExaScaler and its sister company PEZY Computing unveiled ZettaScaler-1.8, the first supercomputer with a performance density of 1.5 PetaFLOPS/m. The ZettaScaler-1.8 is an advanced prototype of the ZettaScaler-2.0, due to be released in 2017 with a performance density three times higher than the ZettaScaler-1.8. ExaScaler’s immersion liquid cooling, using 3M Fluorinert, cools the ZettaScaler-1.8 supercomputer.
Fujitsu demonstrated a new form of data center, which included cloud-based servers, storage, network switch and center facilities, by combining the liquid immersion cooling technology for supercomputers developed by ExaScaler Inc. with Fujitsu’s know-how on general-purpose computers. Fujitsu is able to capitalize on three decades of liquid cooling expertise, from mainframes to supercomputers to Intel x86 systems. This new style of data center uses liquid immersion cooling technology that completely immerses IT systems such as servers, storage, and networking equipment in liquid coolant in order to cool the devices. The liquid immersion cooling technology uses 3M’s Fluorinert, an inert fluid that provides high heat-transfer efficiency and insulation as a coolant. IT devices, including servers and storage, are totally submerged in a dedicated reservoir tank filled with liquid Fluorinert, and the heat generated from the devices is processed by circulating the cooled liquid through the devices. This improves the efficiency of the entire cooling system, thereby significantly reducing power consumption. A further benefit of immersion cooling is that it provides protection from harsh environmental elements, such as corrosion, contamination, and pollution.
3M offers HPC cooling solutions using 3M Engineered Fluids such as Novec or Fluorinert. Perhaps the winner at SC16 for immersed cooling is 3M, as most of the vendors mentioned here use 3M Engineered Fluids. 3M fluids also featured in some of the networking products at the event. Fully immersed systems can improve energy efficiency, allow for significantly greater computing density, and help minimize thermal limitations during design.
Huawei announced a next-generation FusionServer X6000 HPC server that uses a liquid cooling solution featuring a skive fin micro-channel heat sink for CPU heat dissipation and processing technology where water flows through memory modules. This modular board design and 50ºC warm water cooling offers high energy-efficiency and reduces total cost of ownership (TCO).
HPE and Dell both introduced liquid cooling server products in 2016. Though they do not have the lineage of Fujitsu they nevertheless recognize the values liquid cooling delivers to the datacenter.
HPE’s entrance is the Apollo family of high-density servers. These rack-based solutions include compute, storage, networking, power and cooling. Target users are high-performance computing workloads and big data analytics. At the top of the server lineup, the Apollo 8000 uses a warm water-cooling system, whereas other members of the Apollo family of servers integrate the CoolIT Systems Closed-Loop DCLC (Direct Contact Liquid Cooling).
Dell, like HPE, does not have the decades of liquid cooling expertise of Fujitsu. Dell took the covers off the Dell Triton water cooling system in mid-2016. Dell’s Extreme Scale Infrastructure team built Triton as a proof of concept for eBay, leveraging Dell’s rack-scale infrastructure. The liquid-cooled cold plates directly contact the CPUs, and liquid-to-air heat exchangers cool the airborne heat generated by the large number of densely packed processor nodes.
Can we add liquid cooling to existing servers?
Good question, and the answer is no, you really cannot. Adopting liquid cooling only makes sense for new server deployments. That is not to say it is impossible, but many modifications are needed to make water cooling, like direct-to-chip, or full immersion work; it is a big maybe and not really recommended. An existing server has cooling fans that need to be disabled, CPU cooling towers that must be removed, and so on. You also need to add plumbing to your existing rack, which can be a pain. There is no question that a prospective user needs to consider the impact and requirements on existing datacenter infrastructure: the physical bricks, mortar, plumbing, etc. Users considering water-cooled solutions will need to plumb water to the server rack. If you are in a new datacenter that is one level of effort, but if your datacenter is a large closet in an older building, like 43 percent of North American datacenter/server rooms, it may be a lot more difficult and expensive. If you are considering a fully immersed solution, such as Fujitsu’s, no plumbing is required; all you need to do is hook up to a chiller. It may be easier and less expensive than water cooling. As a completely sealed unit, it is conceivable that liquid immersion cooling solutions can be deployed almost anywhere, no datacenter required. Most vendors in this market are small emerging technology companies. Asetek’s data center revenue was $1.8 million in the third quarter and $3.6 million in the first nine months of 2016, compared with $0.5 million and $1.0 million in the corresponding periods of 2015. Asetek is forecasting significant data center revenue growth in 2016 from $1.9M in 2015. CoolIT reported 2014 revenue of $27 million for all product categories. It is worth noting that Asetek and CoolIT data center revenues are less than 10% of total company revenue; the remaining 90% is workstation and PC liquid cooling solutions.
Ebullient, Liquid MIPS, LiquidCool, Green Revolution and Aquila have very few customers and probably less than $10 million in annual revenue each. The obvious question: since most of the vendors are small and very early stage, is there truly a market for liquid-cooled servers? Industry analysts believe there is, forecasting the market to grow from about $110 million in 2015 to almost $960 million in 2020, an additional $850 million of incremental revenue in just five years. With healthy growth prospects, larger players such as Fujitsu have started to enter the market. In addition, the HPC system vendors are all OEMing liquid cooling technology to solve big-system cooling issues in the data center. With the huge increase in data being generated, artificial intelligence and other applications need to mine that data; consequently, more and more server power is required, and new, innovative cooling solutions are needed, making liquid cooling practical and feasible. As a side note, more and more government RFPs are asking for liquid cooling solutions. Solutions such as Fujitsu's can make the crossover from HPC to the commercial data center a reality.
Could 2017 be the breakout year for liquid cooling, the move from innovator to early adopter? The Supercomputing Conference is frequently a window into the future. At SC16, over a dozen companies demonstrated server liquid cooling solutions, with technologies ranging from direct-to-chip to immersive cooling, where servers and storage are fully immersed in dielectric fluid. Today the majority of providers are early-stage or startup companies, with one notable exception: Fujitsu, a global IT powerhouse, brought over thirty years of liquid cooling experience and demonstrated an immersive cooling solution with Intel-based servers, storage and network switches fully immersed in Fluorinert.
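As a quick sanity check on that forecast, the implied compound annual growth rate can be computed directly; this is a sketch using the analyst estimates quoted above ($110M in 2015, $960M in 2020):

```python
# Implied compound annual growth rate (CAGR) of the liquid cooling
# market forecast: $110M (2015) -> $960M (2020), i.e. five years.
def cagr(start, end, years):
    """Compound annual growth rate as a fraction."""
    return (end / start) ** (1 / years) - 1

rate = cagr(110, 960, 5)
print(f"Implied CAGR: {rate:.1%}")  # roughly 54% per year
```

A growth rate above 50% per year is consistent with a market in the "innovator to early adopter" transition the article describes.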
We will see this cooling technology move from the confines of high-end supercomputers to a solid niche in the enterprise data center for workloads such as big data analytics, AI and high-frequency trading.
Data Center Liquid Cooling Market is Projected to Grow at a Healthy CAGR By 2026
Press Release • Feb 12, 2017 04:56 EST
The rise in IT spending and the inclusion of IT in nearly every operation have generated huge amounts of data. That data is stored, processed and transferred through large numbers of stacked servers. Beyond potential cyber threats, these servers face one primary physical threat: heat. Every electronic device generates heat as it runs, whether powered by AC or DC. For servers and IT systems to function properly, the heat they generate must be controlled to prevent malfunction, melting or damage to chipsets and components. Cooling computer components with liquids originated in the 1970s with the IBM 3033 and the Cray-2. In the last ten years, however, with increased awareness and initiatives to go "green" and reduce energy consumption, developing viable industrial-grade liquid cooling systems for data center use became a priority. Current technology employs liquid immersion systems: submerging servers and other components in thermally, but not electrically, conductive liquids such as mineral-based oils. The data center liquid cooling market is anticipated to grow over the forecast period.
A sample of this report is available upon request at http://www.persistencemarketresearch.com/samples/13330
Today, businesses and other establishments rely on data generated through many channels. Enormous volumes of data are stored and transferred in the blink of an eye, and data centers across the globe are multiplying at the same speed at which that data accumulates. The efficiency of these data centers depends heavily on the cooling facilities installed in them. These cooling systems are designed to be adapted to deliver adequate performance regardless of the heat generated; this flexibility, along with growing concern for the environment and for energy savings, will fuel demand for data center liquid cooling over the forecast period. Given the growth in data and data centers across the globe, it is difficult to point to any overall restraint on the data center liquid cooling market. Restraints can be location-specific, however: investment priorities and slowing IT growth in some countries can negatively affect the market.
Data Center Liquid Cooling Market: Market Segmentation
Based on product type, the data center liquid cooling market can be segmented into:
- Water-based
- Oil- and mineral-based
Based on geographic region, the global data center liquid cooling market is segmented into seven key markets: North America, Latin America, Western Europe, Eastern Europe, Asia Pacific excluding Japan (APEJ), Japan, and the Middle East & Africa (MEA). Among these regions, North America will dominate the data center liquid cooling market over the forecast period, owing to the fact that North America has the world's largest data centers and has led in developing and producing data center liquid cooling products. Countries such as China, India and Thailand will be key contributors to growth as their IT industries, and with them their data centers, expand. Western Europe will come next after APEJ in market growth. In Eastern Europe, the data center liquid cooling market has also gathered momentum in recent years. The MEA and Latin America markets have yet to see progress at larger scale, owing to meager growth in data center businesses; however, the market is anticipated to grow in these regions over the forecast period.
A TOC of this report is available upon request at http://www.persistencemarketresearch.com/toc/13330
Is a Liquid-Cooled Data Center in Your Future?
by: Herb Zien – CEO of LiquidCool Solutions
Most data centers in operation today defy logic. They are cooled by circulating conditioned air around the data processing room and through the racks. Separate hot and cold aisles are maintained in an attempt to conserve energy. In most installations, cold air is forced up through holes in the floor. And humidity control is necessary to avoid condensation on IT equipment if too high, or electrostatic discharge if too low.
Air-cooled data centers are expensive to build and operate. Up to 15% of the total power supplied to a data center can be used to circulate air, and another 15% is used by rack and blade fans. Not only are fans inefficient, they fail. Fan cooling also limits power density, which is critical to reducing the white-space footprint as well as maintenance and infrastructure costs.
Cooling with air creates problems beyond wasting energy and space. Contact between air and electronics leads to oxidation and tin whiskers. Pollutants in the air cause additional damage. Filters clog, resulting in overheating. Fans transmit vibrations that loosen solder joints, and they generate heat that must be dissipated. Many data centers operate at noise levels high enough that OSHA regulations require earplugs.
It gets even worse. Raising the temperature in a data center to reduce the need for mechanical refrigeration causes fans in the central air-handling system, CRAC units and device chassis to spin faster to move more air. Fan energy increases as the cube of the volume of air circulated, which means doubling the airflow requires eight times more energy.
All of these problems can be avoided in liquid-cooled data centers. It's simple physics: liquids cool electronics 1,000 times more effectively than air. Air is an insulator with negligible heat capacity or thermal mass. Warm air rises and cold air sinks, so if a data center has a raised floor and cold air is blown uphill, energy is being wasted fighting gravity.
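The cube law above can be illustrated with a short sketch; the 10 kW baseline fan power is a hypothetical figure chosen for the example, not from the article:

```python
# Fan affinity law: fan power scales with the cube of airflow, so even
# modest increases in required airflow carry a large energy penalty.
def fan_power(base_power_kw, flow_ratio):
    """Fan power (kW) after scaling airflow by flow_ratio, per the cube law."""
    return base_power_kw * flow_ratio ** 3

# Doubling airflow multiplies fan power by 8, as the article states.
print(fan_power(10.0, 2.0))  # 10 kW fan -> 80 kW
# Even a 20% airflow increase costs about 73% more power.
print(fan_power(10.0, 1.2))  # -> about 17.3 kW
```

This is why raising room temperature to save on refrigeration can backfire: the extra airflow demanded of every fan in the chain more than eats the savings.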
Ironically, some of the earliest computer installations were liquid cooled, but the technology available then was expensive, messy, difficult to maintain and inconvenient, and water leaks had the potential to be catastrophic. Air conditioning for employee comfort was already installed in the building, so the simplest thing to do was expand the AC system to pick up the additional cooling load of the server rooms. Rather than isolating and solving the data center cooling problem, a bandage was applied: an easy fix.
A lot has changed in the past few years. Energy waste and carbon footprints have become high-visibility issues. Rack power densities have increased, in some cases to the point where air cooling is bumping against thermodynamic limits. The bandage is becoming unstuck. Importantly, some liquid cooling technologies available now overcome the perceptions that carried over from the old days. Liquid-cooled IT devices can be neat, easy to maintain, scalable and inexpensive. In some cases it is possible to commercially recycle much of the input energy to heat buildings or domestic hot water, cutting the carbon footprint even further.
Three technologies have emerged to cool electronic equipment with liquids: cold plates, in-row cooling and immersion in a dielectric fluid. Cold plates, originally designed to enable gamers to overclock their machines, target the hottest or highest-power-density components in servers, namely the processors. Device fans, facility fans and other infrastructure are still required to cool the components that cold plates do not cover. Additionally, cold plates are an ineffective way to cool switches, which lack point sources of heat. Cooling efficiency for cold-plate systems can be 50% better than air. In-row cooling is essentially an attempt to make the room around the IT equipment smaller.
This technology can reduce cooling energy by 60% compared with air, but it still requires all the elements of a complete data center air-conditioning system.
Immersive cooling means that electronics are totally immersed in a nonconducting dielectric fluid, thereby decoupling the electronics from the room and eliminating fans. A closed cycle dissipates the heat. Some immersion systems are single-phase, where the dielectric fluid remains a liquid throughout the heat-dissipation cycle; others are two-phase, in which the fluid boils and then condenses. Cooling efficiency for an immersive system can be more than 90% better than air.
If an organization is considering liquid cooling to address capital cost, operating cost, space, reliability, noise or carbon-footprint problems, immersive-cooling systems are a logical choice. A number of technologies are commercially available, and the devil is in the details, but immersing electronics in a dielectric fluid instead of an air bath offers significant benefits:
– The highest possible thermal efficiency
– No rack or chassis fans to fail
– No oxidation or corrosion of electrical contacts
– Reduction in the thermal fluctuations that drive solder-joint failures
– Much lower operating temperatures for the board and components
– No exposure to electrostatic-discharge events
– No fretting corrosion of electrical contacts induced by structural vibration caused by chassis fans
– No sensitivity to ambient particulate, humidity or temperature conditions
– Waste energy can be recaptured in a form convenient for recycling
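As a rough illustration, the cooling-efficiency figures quoted above (50% for cold plates, 60% for in-row, over 90% for immersion) can be translated into a cooling-only partial PUE. The air-cooled baseline of 0.5 W of cooling power per watt of IT load is an assumption for the sketch, not a figure from the article:

```python
# Translate quoted cooling-energy reductions into a cooling-only
# (partial) PUE: 1.0 + cooling watts per IT watt.
BASE_COOLING = 0.5  # assumed W of cooling per W of IT load, air-cooled

def cooling_pue(reduction):
    """Partial PUE after cutting cooling energy by the given fraction."""
    return 1.0 + BASE_COOLING * (1.0 - reduction)

for tech, cut in [("air (baseline)", 0.0), ("cold plates", 0.5),
                  ("in-row", 0.6), ("immersion", 0.9)]:
    print(f"{tech:15s} partial PUE ~ {cooling_pue(cut):.2f}")
```

Under these assumptions, immersion brings the cooling overhead from 1.50 down to roughly 1.05, which is consistent with the near-elimination of fans and air handling the article describes.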
In addition to the obvious space and power benefits, immersive cooling eliminates the need to purchase, install and maintain chillers, room air handlers, humidity-control systems, water-treatment equipment and air-filtration equipment. It is curious that, considering its obvious advantages, immersive cooling is only now beginning to get market traction. The status quo has a lot of inertia, but it’s not just about power density. Steve Jobs summed it up best: “It takes a lot of hard work to make something simple, to truly understand the underlying challenges and come up with elegant solutions.” Liquid cooling, cleverly executed, can be an elegant solution to reducing data center energy waste, water usage, carbon footprint and cost. The brass-era generation did not trade up from a horse and carriage to a horseless carriage to go 30 miles per hour; they did it to get rid of the horse! The horse used far too much energy, took far more space and polluted the environment. Fans are the horses of the digital age, and immersive cooling is the only certain way to completely eliminate fans.