
BULLX DLC BLADE SYSTEM (B700 SERIES)

To reduce energy consumption by a third while preserving the serviceability of your supercomputer

bullx B700 DLC (Direct Liquid Cooling) blades offer revolutionary computing nodes for data centers. While providing an unequalled level of power for the most demanding applications, they also help reduce the data center’s electricity bills to enable organizations to innovate even faster and take the lead into the future.

The exponential growth in the computing power of supercomputers has gone hand in hand with a sharp increase in their electricity consumption over the past few years. On average, electricity accounts for 28% of data center operating costs in Europe, and the significant rise in the price of a kWh expected in the coming years will further increase the share of electricity in the overall operating budget.

Constant optimization of the PUE*

In an attempt to keep operating budgets reasonable, Bull has long been committed to providing ways of optimizing data center PUE*. Ideally, energy would be consumed only by the servers, resulting in a PUE of 1.0. In practice, a data center involves other electrical equipment, notably air conditioning systems, which means that even the most efficient data centers only achieve a PUE of about 1.8.
A first step towards PUE optimization consists of moving the cold production unit as close as possible to the equipment to be cooled. Bull has offered this for several years with its cool cabinet. This technique enables the PUE to be brought as low as 1.4, without altering any of the IT equipment.

* PUE (Power Usage Effectiveness) is the ratio between the energy consumed by the entire data center and the energy consumed by the actual servers.
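
For illustration, the ratio can be computed directly; the following minimal Python sketch uses hypothetical annual energy figures, not measured values:

    # PUE as defined above: total data-center energy divided by server energy.
    def pue(facility_kwh, it_kwh):
        return facility_kwh / it_kwh

    it_energy = 1_000_000         # kWh consumed by the servers (hypothetical)
    cooling_and_losses = 800_000  # kWh consumed by everything else (hypothetical)
    print(pue(it_energy + cooling_and_losses, it_energy))  # -> 1.8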

Consumption reduced by a third with direct liquid cooling

To further reduce the PUE, the servers must be redesigned so that the heat generated by the main components can be evacuated by a liquid, as close as possible to the source of heat. This direct liquid cooling (DLC) concept is implemented in the bullx B700 range: cooling occurs within the blade itself, by direct contact between the hot components (processor, memory, etc.) and a cooled plate with active liquid flow.

Bearing in mind that a processor operates normally at more than 50°C, memory and SSD disks at more than 40°C, and HDDs at up to 35°C, even water at room temperature is enough to cool them. The design of the bullx DLC B700 makes it possible to cool the system with incoming water at 35°C. It is therefore no longer necessary to produce cold water, which represents considerable savings in terms of electricity consumption, enabling a PUE of 1.1 to be achieved in optimal operating conditions.
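
Taking the PUE figures quoted here (about 1.8 for an efficient air-cooled data center, 1.4 with a cool cabinet, 1.1 with DLC), the impact on the total electricity drawn by the facility can be sketched as follows; the 500 kW IT load is an arbitrary example, not a product figure:

    # Facility draw for the same IT load at the PUE levels quoted in the text.
    it_load_kw = 500.0  # arbitrary example IT load, not a product figure
    for label, pue in [("air-cooled", 1.8), ("cool cabinet", 1.4), ("DLC", 1.1)]:
        print(f"{label:12s} PUE {pue}: facility draw {it_load_kw * pue:.0f} kW")
    print(f"air-cooled -> DLC saving: {1 - 1.1 / 1.8:.0%}")  # about 39%, i.e. more than a third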

To optimize the overall energy efficiency of the data center, Bull recommends re-using the thermal energy generated, for example to heat a nearby building.

Unchanged serviceability

The bullx DLC cabinet contains all the equipment required for the water circuits. It simply has to be connected to the site’s room temperature water circuit. Inside the cabinet, a ready-to-use closed water circuit evacuates the heat from the compute blades via a heat exchanger located at the bottom of the cabinet.

Each chassis constitutes a closed water circuit that can be removed for maintenance without shutting down the entire cabinet. Similarly, each blade is connected to the chassis’ water circuit by dripless connectors, and can be removed as easily as any traditional blade, without stopping operation of the neighboring blades. To avoid contact between the water and the 230V electric lines, the hydraulic circuits are located at the bottom of the cabinet while the electricity supply is at the top.

Furthermore, unlike many of the liquid cooling solutions available on the market, the bullx B700 range uses totally standard components: Intel® Xeon® processors, standard memory modules and disks, which also helps to make maintenance easier.

Finally, for the comfort of all users, the bullx B700 DLC blade system is ultra-silent.

A scalable range for the long term

Based on an innovative architecture, the DLC blade range is designed to integrate the new generations of Intel® Xeon Phi™ coprocessors and NVIDIA® Tesla GPUs.

bullx DLC B700 cabinet

The bullx DLC B700 cabinet supports up to 80 kW of electrical power. It includes:

  • Up to five 15 kW (max) bullx DLC chassis if the Cooperative Power Chassis (CPC) is in an auxiliary box on top of the 42U cabinet (four chassis if the CPC is housed within the cabinet)
  • One Cooperative Power Chassis (CPC), which can contain up to 27 power supplies of 3 kW each in seven cooperative power shelves
  • One Hydraulic chassis (HYC) in the standard configuration. A second HYC can be installed for redundancy.
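
As a quick sanity check on the figures above (a sketch only; all numbers come from the list):

    # Power-budget check using the figures listed above.
    cabinet_limit_kw = 80
    chassis_load_kw = 15 * 5     # five bullx DLC chassis at 15 kW (max) each
    psu_capacity_kw = 3 * 27     # 27 power supplies of 3 kW in the CPC
    assert chassis_load_kw <= cabinet_limit_kw <= psu_capacity_kw
    print(chassis_load_kw, cabinet_limit_kw, psu_capacity_kw)  # 75 80 81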

Since dissipating 80 kW is simply not possible with an air-cooled solution, the bullx DLC blade system implements a direct liquid cooling solution.

This solution is based on heat exchange, at the board level, between the IC packages and the so-called “cold plates”. A cooling liquid flows through these cold plates and is pumped up to the rack’s main liquid-to-liquid heat exchanger (HYC), where the rack heat is transferred to the customer’s water-cooling loop. The rack distributes the cooling fluid to the five server chassis and returns the fluid to the HYC heat exchangers.
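
To give an order of magnitude, the water flow needed to remove the full cabinet load can be estimated with Q = m·cp·ΔT; in this sketch the 7 K rise between supply and return is an assumed value for illustration, not a bullx specification:

    # Order-of-magnitude water flow needed to carry away the full 80 kW load.
    heat_kw = 80.0
    cp_water = 4.186   # kJ/(kg*K), specific heat of water
    delta_t = 7.0      # K, assumed rise between supply and return (not a spec)
    flow_kg_s = heat_kw / (cp_water * delta_t)
    print(f"~{flow_kg_s:.1f} kg/s, about {flow_kg_s * 3.6:.1f} m3/h")  # ~2.7 kg/s, ~9.8 m3/h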

bullx DLC B700 chassis

This rack-mount 7U drawer hosts up to 9 hot-swap double blades, i.e. 18 dual-socket compute nodes. It includes:

Front side:
  • 1 front blower box (FBB) for CMM and ESM cooling
  • 1 Display Panel with power and reset push buttons
Rear side:
  • 2 Power Distribution Modules. Each PDM provides 54V through multiple connectors to the midplane
  • Up to 2 InfiniBand Switch Modules which integrate ConnectX-3 (new-generation FDR InfiniBand switch).
  • 1 Chassis Management Module (CMM).
  • 1 optional 1Gb/s or 10Gb/s Ethernet Switch Module that provides the 9 compute blades with Ethernet access to the client's backbone network.
  • 6 Disk blower boxes to cool the HDDs installed on the compute nodes.

All modules are interconnected through the midplane.

bullx DLC B710 compute blade

The bullx B710 compute blade is a hot-plug blade based on the Romley-EP platform from Intel. It is built upon a cold plate which cools all components by direct contact, except the DIMMs, for which custom heat spreaders evacuate the heat to the cold plate. This double blade includes:

  • 2 x 2 Intel® Xeon® processors E5-2600 v2 Family (code-named Ivy Bridge-EP)
  • 2 x Intel® chipset C600 series (Patsburg PCH)
  • 2 x 8 DDR3 DIMM sockets, i.e. up to 2 x 256GB registered ECC DDR3 (with 32GB DIMMs)
  • Up to 2 x 2 SATA disks (2.5") or 2 x 2 SSD disks (2.5") – max. height 15mm
  • 2 x 1 dual-port 1Gb Ethernet controller for links to the CMM and the ESM or TSM
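
As a small check on the memory figure in the list above (a sketch using the per-node DIMM count and capacity as stated):

    # Memory capacity per double blade, from the figures listed above.
    nodes, dimms_per_node, dimm_gb = 2, 8, 32
    print(dimms_per_node * dimm_gb, "GB per node")           # 256 GB
    print(nodes * dimms_per_node * dimm_gb, "GB per blade")  # 512 GB in total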

bullx DLC B715 accelerator blades

Developed by the Bull R&D teams, the bullx DLC B715 accelerator blades are based on the same direct cooling system as the bullx DLC B710 compute blades, but they combine within the same blade:

  • two Intel® Xeon® processors, AND
  • two latest generation NVIDIA® Tesla™ GPUs OR
    two latest generation Intel® Xeon Phi coprocessors.

They can be combined with B710 compute blades within the same bullx Direct Liquid Cooling chassis.

 

