

Reduce energy consumption by a third
while preserving the serviceability
of your supercomputer

bullx B700 DLC (Direct Liquid Cooling) blades offer revolutionary computing nodes for data centers. While providing an unequalled level of power for the most demanding applications, they also help reduce the data center’s electricity bills to enable organizations to innovate even faster and take the lead into the future.

The exponential growth in supercomputer performance over the past few years has gone hand in hand with a sharp increase in power consumption. On average, electricity accounts for 28% of data center operating costs in Europe, and the rise in electricity prices expected in the coming years will further increase the weight of electricity in the overall operating budget.

Constant optimization of the PUE*

In an effort to keep operating budgets reasonable, Bull has long been committed to providing ways of optimizing data center PUE*. Ideally, energy would be consumed only by the servers, resulting in a PUE of 1.0. In practice, a data center involves other electrical equipment, notably air conditioning systems, so even the most efficient conventional facilities only achieve a PUE of about 1.8.
A first step towards PUE optimization consists of moving the cold production unit as close as possible to the equipment to be cooled. Bull has offered this approach for several years with its cool cabinet. This technique brings the PUE down to as low as 1.4, without altering any of the IT equipment.

* PUE (Power Usage Effectiveness) is the ratio between the energy consumed by the entire data center and the energy consumed by the actual servers.
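The footnote above can be turned into a two-line calculation. The figures below are illustrative round numbers chosen to match the PUE values quoted in this document, not measured data:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total data center power / server power."""
    return total_facility_kw / it_equipment_kw

# Conventional air-cooled room: 1000 kW of servers plus 800 kW of
# cooling and other overhead -> the PUE of about 1.8 cited above.
print(pue(1800.0, 1000.0))   # 1.8

# Warm-water direct liquid cooling: the same 1000 kW of servers with
# only 100 kW of overhead -> the PUE of 1.1 achievable with DLC.
print(pue(1100.0, 1000.0))   # 1.1
```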

Consumption reduced by a third with direct liquid cooling

To further reduce the PUE, the servers must be redesigned so that the heat generated by the main components can be evacuated by a liquid, as close as possible to the source of heat. This direct liquid cooling (DLC) concept is implemented in the bullx B700 range: cooling occurs within the blade itself, by direct contact between the hot components (processor, memory, etc.) and a cooled plate with active liquid flow.

Bearing in mind that a processor operates normally at more than 50°C, memory and SSD disks at more than 40°C, and HDDs at up to 35°C, even water at room temperature is enough to cool them. The design of the bullx DLC B700 makes it possible to cool the system with incoming water at 40°C. It is therefore no longer necessary to produce cold water, which represents considerable savings in terms of electricity consumption, enabling a PUE of 1.1 to be achieved in optimal operating conditions.
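The temperature figures in the paragraph above explain why 40°C inlet water is sufficient for most components. A quick sketch of that reasoning (the component limits come from the text; everything else is illustrative):

```python
# Normal operating temperature limits quoted above (°C)
component_limits_c = {"processor": 50, "memory/SSD": 40, "HDD": 35}
inlet_water_c = 40  # incoming facility water, no chilled-water production

for name, limit_c in component_limits_c.items():
    margin_c = limit_c - inlet_water_c
    cooled_by_water = margin_c >= 0
    print(f"{name}: {margin_c:+d} °C vs inlet water -> "
          f"{'liquid-cooled' if cooled_by_water else 'needs air cooling'}")
```

Note that the HDD limit sits below the 40°C water temperature, which is consistent with the chassis description further down: the HDDs get dedicated disk blower boxes rather than cold-plate cooling.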

To optimize the overall energy efficiency of the data center, Bull recommends re-using the thermal energy generated, for example to heat a nearby building.

Unchanged serviceability

The bullx DLC cabinet contains all the equipment required for the water circuits. It simply has to be connected to the site’s room temperature water circuit. Inside the cabinet, a ready-to-use closed water circuit evacuates the heat from the compute blades via a heat exchanger located at the bottom of the cabinet.

Each chassis constitutes a closed water circuit that can be removed for maintenance without shutting down the entire cabinet. Similarly, each blade is connected to the chassis' water circuit by dripless connectors, and can be removed as easily as any traditional blade, without interrupting the operation of the neighboring blades. To avoid any contact between the water and the 230 V electric lines, the hydraulic circuits are located at the bottom of the cabinet while the power supply is at the top.

Furthermore, unlike many of the liquid cooling solutions available on the market, the bullx B700 range uses completely standard components: Intel® Xeon® processors, standard memory modules and disks, which also helps to make maintenance easier.

The chassis can optionally be fitted with ultracapacitors that protect it from power outages of up to 300 ms, or up to 800 ms if dynamic power reduction is activated on the compute nodes.

Finally, for the comfort of all users, the bullx B700 DLC blade system is ultra-silent.

bullx DLC B700 cabinet


The bullx DLC B700 cabinet supports up to 80 kW of electrical power. It includes:

  • Up to five 15 kW (max) bullx DLC chassis if the Cooperative Power Chassis (CPC) is housed in an auxiliary box on top of the 42U cabinet (four chassis if the CPC is housed within the cabinet)
  • One Cooperative Power Chassis (CPC), which can contain up to 27 power supplies of 3 kW each in seven cooperative power shelves
  • One Hydraulic Chassis (HYC) in the standard configuration; a second HYC can be installed for redundancy.
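The cabinet-level numbers above can be cross-checked with simple arithmetic (a sketch using only the figures quoted in this sheet):

```python
# Figures from the list above: up to 5 chassis at 15 kW each, and a
# CPC holding up to 27 power supplies of 3 kW each.
chassis_count, chassis_kw = 5, 15
psu_count, psu_kw = 27, 3

compute_load_kw = chassis_count * chassis_kw   # 75 kW peak compute load
supply_capacity_kw = psu_count * psu_kw        # 81 kW total PSU capacity

# The 80 kW cabinet rating sits between the peak compute load and the
# total supply capacity, leaving PSU headroom for redundancy.
print(compute_load_kw, supply_capacity_kw)     # 75 81
```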

Since dissipating 80 kW is simply not feasible with an air-cooled solution, the bullx DLC blade system relies on direct liquid cooling.

This solution is based on heat exchange, at the board level, between the IC packages and so-called "cold plates". A cooling liquid flows through these cold plates and is pumped to the rack's main liquid-to-liquid heat exchanger (HYC), where the rack's heat is transferred to the customer's water-cooling loop. The rack distributes the cooling fluid to the five server chassis and returns it to the HYC heat exchangers.

bullx DLC B700 chassis


This rack-mount 7U drawer hosts up to 9 hot-swap double blades, i.e. 18 dual-socket compute nodes. It includes:

Front side:
  • 1 front blower box (FBB) for CMM and ESM cooling
  • 1 Display Panel with power and reset push buttons
Rear side:
  • 2 Power Distribution Modules (PDM). Each PDM provides 54 V through multiple connectors to the midplane. They can be equipped with ultracapacitors to ride through power outages of up to 300 ms.
  • Up to 2 InfiniBand Switch Modules, which integrate ConnectX-3 (new-generation FDR InfiniBand switch).
  • 1 Chassis Management Module (CMM).
  • 1 optional 1 Gb/s or 10 Gb/s Ethernet Switch Module that provides the 9 compute blades with Ethernet access to the client's backbone network.
  • 6 Disk blower boxes to cool the HDDs installed on the compute nodes.

All modules are interconnected through the mid-plane.

bullx DLC B720 compute blade


The bullx B720 compute blade is a hot-plug blade based on Intel's Grantley-EP platform. It is built upon a cold plate which cools all components by direct contact, except the DIMMs, for which custom heat spreaders conduct the heat to the cold plate. This double blade includes:

  • 2 x 2 Intel® Xeon® E5-2600 v3 family processors (code-named Haswell EP)
  • 2 x Intel® C610 series chipset (Wellsburg)
  • 2 x 8 DDR4 DIMM sockets, i.e. up to 2 x 256 GB Registered ECC DDR4 (with 32 GB DIMMs)
  • Up to 2 x 2 SATA disks (2.5") or 2 x 2 SSD disks (2.5"), max. height 15 mm
  • 2 x 1 dual-port 1 Gb Ethernet controller for links to the CMM and ESM or TSM

bullx DLC B715 accelerator blades


Developed by the Bull R&D teams, the bullx DLC B715 accelerator blades also integrate the direct liquid cooling mechanism, but combine within the same blade:

  • two Intel® Xeon® processors, AND
  • two latest generation NVIDIA® Tesla™ GPUs OR
    two latest generation Intel® Xeon Phi coprocessors.


