bullx B505 accelerator blade
 • Embedded GPUs eliminating bottlenecks
 
bullx B505 accelerator blades are designed for full bandwidth between each GPU and its host CPU, and for double interconnect bandwidth between blades.

 
  • Double-width blade
  • 2 NVIDIA Tesla M2050/M2070/M2070Q/M2090 GPUs
  • 2 Intel® Xeon® 5600 CPUs
  • 1 dedicated PCI-e 16x connection for each GPU
  • Double InfiniBand QDR connections between blades
  • Red Hat / SuSE Linux support
     
    The bullx chassis can host either 18 single-width compute blades (B500 model), each containing two CPUs, or 9 double-width accelerator blades (B505 model), each containing two CPUs and two GPUs, or any combination of these two blade types.
    The bullx B505 accelerator blade packs two Intel® Xeon® 5600 quad-core CPUs and two NVIDIA® Tesla™ 20-Series GPUs into a double-width blade. Each NVIDIA Tesla computing card offers 448 cores running at up to 1.15 GHz, with 3 GB or 6 GB of dedicated memory per GPU, depending on the model.
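
    As a quick way to see this dual-GPU layout from software, the short CUDA sketch below (illustrative only, not Bull-supplied code) enumerates the Tesla cards visible in a blade and prints their multiprocessor count, clock, memory size and PCI location; it assumes nothing beyond a standard CUDA runtime installation.

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::fprintf(stderr, "No CUDA-capable device found\n");
        return 1;
    }
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        // clockRate is reported in kHz and totalGlobalMem in bytes
        std::printf("GPU %d: %s, %d multiprocessors, %.2f GHz, %.1f GB, PCI bus %d\n",
                    dev, prop.name, prop.multiProcessorCount,
                    prop.clockRate / 1.0e6,
                    prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0),
                    prop.pciBusID);
    }
    return 0;
}

    On a B505 blade this would report two Tesla devices, each behind its own PCI-e 16x link.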

    The bullx B505 accelerator blades are the only blades on the market designed for full bandwidth between each GPU and its host CPU, and for double interconnect bandwidth between blades.
    To get the most out of GPU accelerators, they must be integrated as tightly as possible with the host server:
  • A key factor in optimizing application efficiency is the bandwidth between the accelerator card and the host processor. To maximize the performance of the two GPUs, Bull has therefore provided one dedicated PCI-e 16x connection for each GPU (see the first code sketch after this list).
  • For applications parallelized across several blades, such as Reverse Time Migration algorithms, it is important to minimize data transfer time between GPUs, and thus to provide large bandwidth between blades. The bullx B505 blade therefore includes two InfiniBand QDR network connections, so that each GPU has access to full QDR bandwidth. The two InfiniBand QDR ports are connected to the first-level switch integrated in the bullx chassis (see the second code sketch after this list).
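
    The first sketch below illustrates the benefit of a dedicated PCI-e 16x link per GPU: it copies a pinned host buffer to each GPU through its own stream, so the two host-to-device transfers can proceed in parallel rather than sharing a single link. Buffer size and variable names are illustrative assumptions, not values from Bull documentation.

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    const size_t bytes = 256UL << 20;          // 256 MB per GPU (example value)
    int ngpus = 0;
    cudaGetDeviceCount(&ngpus);
    if (ngpus > 2) ngpus = 2;                  // the B505 hosts two Tesla GPUs

    float *host[2];
    float *dev[2];
    cudaStream_t stream[2];

    for (int g = 0; g < ngpus; ++g) {
        cudaSetDevice(g);
        cudaMallocHost((void **)&host[g], bytes);   // pinned host memory enables full-speed DMA
        cudaMalloc((void **)&dev[g], bytes);
        cudaStreamCreate(&stream[g]);
    }
    // Issue both host-to-device transfers back to back; with one PCI-e 16x
    // link per GPU they run concurrently instead of contending for one link.
    for (int g = 0; g < ngpus; ++g) {
        cudaSetDevice(g);
        cudaMemcpyAsync(dev[g], host[g], bytes, cudaMemcpyHostToDevice, stream[g]);
    }
    for (int g = 0; g < ngpus; ++g) {
        cudaSetDevice(g);
        cudaStreamSynchronize(stream[g]);
        cudaFree(dev[g]);
        cudaFreeHost(host[g]);
        cudaStreamDestroy(stream[g]);
    }
    std::printf("Transferred %zu MB to each of %d GPU(s)\n", bytes >> 20, ngpus);
    return 0;
}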
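
    For the multi-blade case, the second sketch shows the kind of boundary (halo) exchange an RTM-style stencil code performs between GPUs on different blades, with MPI carrying the data over the InfiniBand fabric. It assumes one MPI rank per GPU and stages the buffer through host memory; the rank mapping, buffer size and names are illustrative, and a CUDA-aware MPI could pass the device pointer to MPI_Sendrecv directly.

#include <mpi.h>
#include <cuda_runtime.h>
#include <vector>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Bind each rank to one of the two GPUs in its blade.
    int ngpus = 1;
    cudaGetDeviceCount(&ngpus);
    if (ngpus < 1) ngpus = 1;
    cudaSetDevice(rank % ngpus);

    const int halo = 1 << 20;                  // floats per boundary slab (example value)
    float *d_halo;
    cudaMalloc((void **)&d_halo, halo * sizeof(float));
    cudaMemset(d_halo, 0, halo * sizeof(float));
    std::vector<float> send(halo), recv(halo);

    int up   = (rank + 1) % size;              // neighbouring ranks in a 1-D decomposition
    int down = (rank + size - 1) % size;

    // Stage the GPU boundary through host memory, then exchange it over InfiniBand.
    cudaMemcpy(send.data(), d_halo, halo * sizeof(float), cudaMemcpyDeviceToHost);
    MPI_Sendrecv(send.data(), halo, MPI_FLOAT, up,   0,
                 recv.data(), halo, MPI_FLOAT, down, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    cudaMemcpy(d_halo, recv.data(), halo * sizeof(float), cudaMemcpyHostToDevice);

    cudaFree(d_halo);
    MPI_Finalize();
    return 0;
}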