
BULLX BLADE SYSTEM (B500 SERIES)

Designed without compromise by the Bull R&D teams
for High Performance Computing
to give you unlimited power to innovate

The bullx blade system was designed by Bull's Extreme Computing R&D team, Europe's largest team of experts in the field of high performance computing, with the following principles in mind:

  • Optimization and simplification of the compute node for HPC use
  • Integration of several compute nodes and first-level interconnect
  • Flexible structure for communication and I/O networks, for the closest fit with customer requirements

The bullx B510 compute blades - the second generation of bullx blades - were entirely redesigned to further optimize compute performance and energy consumption, yet they remain fully compatible with the existing chassis. An existing B500-based system can therefore be upgraded on site, simply by replacing the previous blades with bullx B510 blades.

The bullx blade system leverages the latest technological advances such as:

  • New-generation Intel® Xeon® processors (E5-2600 v2 Family, code-named Ivy Bridge-EP), delivering ever more computing power
  • InfiniBand network connection through an integrated 36-port QDR or FDR switch
  • Local storage on SSD
  • Availability of accelerated blades – the bullx chassis can house B510 compute blades, B515 accelerated blades, or any mix of the two

Thanks to these innovations, the bullx blade system delivers:

  • Improved performance (enhanced compute node efficiency, reduced latency, higher communication and I/O throughput)
  • Reduced cost of ownership (fewer components, better fit of communication network, reduced installation time, improved upgradeability, power efficiency)
  • Enhanced reliability (less cabling)

bullx blade chassis: high density, uncompromised performance and optimized power consumption


The bullx blade chassis can host up to 9 double compute blades, i.e. 18 compute nodes in 7U. It also contains the first-level interconnect, a management unit and all the components needed to power and cool the blades, interconnect and management unit. It is the ideal foundation for building a medium to large-scale HPC cluster, combining bullx compute blades with service nodes from the bullx R423 family.

The bullx blade chassis delivers a peak performance of over 9 Tflops (CPUs only) and offers a fully non-blocking InfiniBand QDR or FDR interconnect.
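
For illustration, here is a rough back-of-the-envelope check of that peak figure, as a short Python sketch. The CPU parameters used (a top-bin 12-core Xeon E5-2697 v2 at 2.7 GHz, 8 double-precision flops per cycle with AVX) are assumptions made for the sake of the example, not figures taken from this page:

    # Back-of-the-envelope peak-performance check for one bullx chassis.
    # CPU parameters are assumptions (e.g. 12-core Xeon E5-2697 v2 at 2.7 GHz).
    cores_per_cpu     = 12     # assumed core count
    freq_ghz          = 2.7    # assumed base frequency
    flops_per_cycle   = 8      # double precision with AVX (Ivy Bridge, no FMA)
    cpus_per_node     = 2
    nodes_per_chassis = 18     # 9 double-width blades per 7U chassis

    peak_gflops = (nodes_per_chassis * cpus_per_node
                   * cores_per_cpu * freq_ghz * flops_per_cycle)
    print(f"Chassis peak: {peak_gflops / 1000:.1f} Tflops")   # ~9.3 Tflops

With these assumptions the chassis lands at roughly 9.3 Tflops, consistent with the "over 9 Tflops" figure above.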

An optional ultracapacitor protects the chassis against power outages of up to 250 ms. In areas with a good-quality power supply, the ultracapacitor removes the need for a UPS, saving up to 15% on power consumption.

The chassis is equipped with high-efficiency (>90%) power supply units, and power consumption is further optimized by the smart fan control in each chassis.

bullx B510 compute blade


The bullx B510 is a double-width blade containing two compute nodes, which share components such as fans in order to optimize energy consumption. Each node features:

  • 2 new-generation Intel® Xeon® E5-2600 v2 Family processors
  • Up to 256 GB DDR3 memory per node
  • 1 SATA SSD drive per node

bullx B515 accelerator blade: embedded accelerators eliminating bottlenecks

  • Double-width blade
  • 2 Intel® Xeon® E5 v2 family processors
  • 2 NVIDIA® Tesla™ GPUs
    OR
  • 2 Intel® Xeon® Phi™ coprocessors
  • Dedicated PCI-Express x16 slot for each accelerator
  • Memory: up to 192 GB per blade
  • 2 ConnectX®-3 adapters, each providing a single QDR/FDR InfiniBand channel

The bullx B515 blades are the only blades on the market designed to provide full bandwidth between each accelerator and the host CPU, and double interconnect bandwidth between blades.
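
To put rough numbers on those two claims, the Python sketch below compares the per-accelerator PCI-Express bandwidth (one dedicated x16 slot per accelerator) with the aggregate InfiniBand bandwidth of the two ConnectX®-3 adapters. The PCIe generation (Gen3) and the encoding overheads are assumptions based on the published standards, not figures from this page:

    # Rough bandwidth sketch for the B515 (link rates and encodings assumed,
    # not taken from this page).
    pcie3_lane_gbytes = 0.985                    # GB/s per PCIe Gen3 lane (128b/130b)
    pcie_x16_gbytes   = 16 * pcie3_lane_gbytes   # ~15.8 GB/s per accelerator slot

    fdr_data_gbits  = 54.5                       # FDR 4x data rate after 64b/66b encoding
    fdr_link_gbytes = fdr_data_gbits / 8         # ~6.8 GB/s per direction, per adapter
    dual_ib_gbytes  = 2 * fdr_link_gbytes        # two ConnectX-3 adapters per blade

    print(f"PCIe x16 per accelerator : ~{pcie_x16_gbytes:.1f} GB/s")
    print(f"InfiniBand, 2 x FDR links: ~{dual_ib_gbytes:.1f} GB/s per direction")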

 

bullx blade system: technical specifications flyer

Ultra-concentrated and modular systems designed without compromise for unlimited innovation

 

