bullx B700 DLC (Direct Liquid Cooling) blades offer revolutionary computing nodes for data centers. While providing an unequalled level of power for the most demanding applications, they also help reduce the data center’s electricity bills, enabling organizations to innovate even faster and take the lead into the future.
The exponential growth in supercomputer performance has gone hand in hand with a strong increase in power consumption over the past few years. On average, electricity accounts for 28% of data center operating costs in Europe, and the significant rise in the price of a kWh expected in the coming years will further increase the weight of electricity in the overall operating budget.
To keep operating budgets reasonable, Bull has long been committed to providing ways of optimizing data center PUE*. Ideally, energy would be consumed only by the servers, resulting in a PUE of 1.0. In practice, a data center involves other electrical equipment, notably air-conditioning systems, which means that even the most efficient data centers only achieve a PUE of about 1.8.
A first step towards PUE optimization consists of moving the cold production unit as close as possible to the equipment to be cooled. Bull has offered this for several years with its cool cabinet. This technique brings the PUE down to 1.4, without altering any of the IT equipment.
* PUE (Power Usage Effectiveness) is the ratio of the energy consumed by the entire data center to the energy consumed by the servers themselves.
To further reduce the PUE, the servers must be redesigned so that the heat generated by the main components can be evacuated by a liquid, as close as possible to the source of heat. This direct liquid cooling (DLC) concept is implemented in the bullx B700 range: cooling occurs within the blade itself, by direct contact between the hot components (processor, memory, etc.) and a cold plate with active liquid flow.
Bearing in mind that a processor operates normally at more than 50°C, memory and SSD disks at more than 40°C, and HDDs at up to 35°C, even water at room temperature is enough to cool them. The design of the bullx DLC B700 makes it possible to cool the system with incoming water at 40°C. It is therefore no longer necessary to produce cold water, which represents considerable savings in terms of electricity consumption, enabling a PUE of 1.1 to be achieved in optimal operating conditions.
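The PUE figures quoted above translate directly into overhead energy. A minimal sketch, using a hypothetical 1 MW IT load to compare the roughly 1.8 PUE of an efficient air-cooled data center with the 1.1 achievable with DLC (the load and duration are illustrative assumptions, not figures from this document):

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total data-center energy / IT energy."""
    return total_facility_kwh / it_equipment_kwh

# Hypothetical figure: a 1 MW IT load running for one year.
it_kwh = 1_000 * 24 * 365  # 8 760 000 kWh consumed by the servers themselves

# Overhead energy (cooling, power distribution, ...) at the PUE levels cited:
overhead_air = it_kwh * (1.8 - 1.0)  # efficient air-cooled data center
overhead_dlc = it_kwh * (1.1 - 1.0)  # direct liquid cooling

print(pue(it_kwh + overhead_air, it_kwh))  # ~1.8
print(pue(it_kwh + overhead_dlc, it_kwh))  # ~1.1
print(f"overhead saved: {overhead_air - overhead_dlc:,.0f} kWh/year")
```

Under these assumptions, moving from a PUE of 1.8 to 1.1 saves the overhead equivalent of 70% of the IT load itself.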
To optimize the overall energy efficiency of the data center, Bull recommends re-using the thermal energy generated, for example to heat a nearby building.
The bullx DLC cabinet contains all the equipment required for the water circuits. It simply has to be connected to the site’s room temperature water circuit. Inside the cabinet, a ready-to-use closed water circuit evacuates the heat from the compute blades via a heat exchanger located at the bottom of the cabinet.
Each chassis constitutes a closed water circuit that can be removed for maintenance without shutting down the entire cabinet. Similarly, each blade is connected to the chassis’ water circuit by dripless connectors, and can be removed as easily as any traditional blade, without interrupting the operation of the neighboring blades. To avoid contact between the water and the 230V electric lines, the hydraulic circuits are located at the bottom of the cabinet while the electricity supply is at the top.
Furthermore, unlike many of the liquid cooling solutions available on the market, the bullx B700 range uses totally standard components: Intel® Xeon® processors, standard memory modules and disks, which also makes maintenance easier.
The chassis can optionally be fitted with ultracapacitors that protect it from power outages of up to 300 ms – up to 800 ms if dynamic power reduction is activated on the compute nodes.
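The energy such ultracapacitors must store follows from E = P × t. A back-of-envelope sketch, assuming a hypothetical per-chassis load of one fifth of the 80 kW cabinet budget (the per-chassis draw is not stated in the text):

```python
# Assumed per-chassis load: 80 kW cabinet / 5 chassis (illustrative figure).
chassis_power_w = 80_000 / 5   # 16 kW per chassis
ride_through_s = 0.300         # outage covered at full power, from the text

# Energy the ultracapacitor bank must deliver during the outage.
energy_j = chassis_power_w * ride_through_s
print(f"{energy_j / 1000:.1f} kJ")  # 4.8 kJ
```

The longer 800 ms ride-through is reachable because dynamic power reduction lowers the compute nodes’ draw during the outage, stretching the same stored energy over more time.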
Finally, for the comfort of all users, the bullx B700 DLC blade system is ultra-silent.
The bullx DLC B700 cabinet supports up to 80 kW of electrical power. It includes:
Since dissipating 80 kW is simply impossible with an air-cooled solution, the bullx DLC blade system implements a direct liquid cooling solution.
This solution is based on heat exchange, at the board level, between the IC packages and so-called ''cold plates''. A cooling liquid flows through these cold plates and is pumped up to the rack’s main liquid-to-liquid heat exchanger (HYC), where the heat is transferred to the customer’s water-cooling loop. The rack distributes the cooling fluid to the five server chassis and returns it to the HYC heat exchanger.
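The water flow such a loop needs follows from Q = ṁ·c·ΔT. A back-of-envelope sketch for the 80 kW cabinet load, assuming a 10 K temperature rise across the loop (a figure not stated in the text):

```python
# Flow rate needed to carry away the cabinet's heat load with water.
heat_load_w = 80_000   # cabinet heat load, from the text
c_p = 4186.0           # specific heat of water, J/(kg.K)
delta_t = 10.0         # assumed inlet/outlet temperature rise, K

mass_flow = heat_load_w / (c_p * delta_t)  # kg/s
litres_per_min = mass_flow * 60            # ~1 kg of water per litre

print(f"{mass_flow:.2f} kg/s  ~= {litres_per_min:.0f} L/min")
```

Roughly 2 kg/s (about 115 L/min) of water suffices under this assumption, which is why a room-temperature water loop can replace chilled-water production entirely.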
This rack-mount 7U drawer hosts up to 9 hot-swap double blades, i.e. 18 dual-socket compute nodes. It includes:
Front side:
All modules are interconnected through the mid-plane.
The bullx B720 compute blade is a hot-plug blade based on the Grantley-EP platform from Intel. It is built upon a cold plate which cools all components by direct contact, except the DIMMs, for which custom heat spreaders evacuate the heat to the cold plate. This double blade includes:
Developed by the Bull R&D teams, the bullx DLC B715 accelerator blades also integrate the direct liquid cooling mechanism, but they combine within the same blade: