AI SuperCluster

Enhance the training and inference of next-generation large language models with trillions of parameters

As technology advances, new opportunities arise to accelerate the development and availability of large-scale, ultra-complex AI language models that leverage an immense number of parameters.

Realizing this potential requires an infrastructure that delivers ultra-efficient performance and sound resource management.
 


What we offer

Today, Bull is one of the few suppliers of the NVIDIA GB200 NVL72, thanks to a strategic collaboration with Supermicro and a close relationship with NVIDIA. This recently developed technology meets the very specific needs of large language models and serves as an indispensable accelerator.

Bull not only sells the rack but, more importantly, supports its customers in adopting this technology by providing a comprehensive all-in-one solution, including installation, configuration, and multi-level support. Thanks to its field-proven expertise, local presence, and close client relationships, Bull positions itself as a unique provider that understands customer needs and pain points, ensuring the fast deployment and success of the project.

  • Enhanced performance:
    It enables 4 times faster training for large language models at scale and delivers 30 times faster real-time large language model (LLM) inference compared to previous models.

  • Advanced architecture:
    Scales up computational power by connecting 36 Grace CPUs and 72 Blackwell GPUs in a rack-scale design, providing 130 terabytes per second (TB/s) of low-latency GPU communication.

  • Energy efficiency:
    The hybrid cooling rack design integrates air cooling with direct liquid cooling (DLC) to enhance efficiency and performance, significantly reducing a data center’s carbon footprint and energy consumption.
Embark on your journey to unparalleled infrastructure availability and performance with Bull

Technology readiness

A pre-integrated rack solution that ensures the best time to market for a unique and advanced AI infrastructure.

Exclusive access

Don't miss out on the opportunity to be among the first to secure NVIDIA's GB200 NVL72.

Local expertise

At Bull, our experienced team leverages local knowledge to offer solutions aligned with customer needs.

100% European Infrastructure

From hardware to virtualization platform, ensuring sovereignty, compliance, and full auditability through open-source engineering.

Reliable, Cost-Efficient Performance

A robust and future-proof virtualization solution delivering 60–80% lower TCO compared to traditional options.

Tailored, Scalable Architecture

Flexible configurations with BullSequana SH or BullSequana SA to meet diverse workload needs.

High Density and Low Latency

A lightweight hypervisor and optimized design for a maximum VM-to-core ratio.

End-to-End Security

Full supply chain transparency, advanced hardware protection, and trusted execution technologies.

Energy Efficiency

Reduced power consumption per workload with eco-conscious infrastructure design and smart VM scheduling.

Easy Expansion and Automation

Modular, composable hardware with API-driven orchestration for seamless scalability.

Easy Deployment

An already validated reference architecture and a single point of contact (SPOC) for hardware-software integration and support.

In partnership with

Supermicro | NVIDIA

Related resources

Get in touch