High Performance Computing (HPC) has become an essential tool in the worlds of research and industry alike. From aeronautics, climatology and bioscience to finance and sport, computer modeling and simulation have made inroads into most areas.
Why? Quite simply because HPC provides today’s researchers with an essential means of investigation, creates major competitive advantages for manufacturers and businesses, and is even fundamental to countries’ national sovereignty.
There’s no doubt that acquiring the necessary technology for HPC represents a strategic investment, which makes it crucial to find a partner capable of understanding its customers’ specific requirements, designing the most appropriate architecture and providing the technology support essential to getting the best out of your HPC system.
As Europe’s leading manufacturer of servers and HPC solutions, Bull has all the human and physical resources it needs to create and sustain an offering that meets even the most demanding requirements for scientific or industrial computing, including:
• Modular, integrated solutions, to respond to the precise needs of each organization while also simplifying administration
• Solutions that capitalize on the expertise accumulated by Bull over many years of working on large-scale systems architectures, operating systems and software/systems administration environments
• Solutions that have been chosen by organizations as diverse as: the French Atomic Energy Commission (CEA), for whom we built TERA-10, the most powerful supercomputer in Europe; Dassault-Aviation, the world’s leading business aircraft manufacturer; Pininfarina, the world-famous Italian automotive designer; and the National Oceanography Centre in Southampton, one of the most highly respected research centers in this area.
In the old days, the high cost of HPC meant it was limited to a few select areas. Today, high performance computing is affordable and available to a growing number and range of organizations, with the emergence of solutions that deliver very high levels of performance for significantly lower cost, based around clusters or grids of standard servers. Searching for new oil reserves, studying the human genome, simulating transport crashes… HPC solutions are helping us to push back the boundaries in so many areas of society, to solve new kinds of problems or explore existing, complex problems in more detail or more quickly.
The two key pillars of HPC strategy
How do you offer the highest possible availability? How do you ensure the same flexibility and ease of use you’d expect from a traditional system? And how do you ensure that your system is scalable, capable of supporting tomorrow’s workloads? With technology expanding rapidly, Bull is ready with the answers, putting the emphasis on two determining factors:
1. Open systems, based on industry standards, for greater flexibility and a highly competitive price/performance ratio
Widely available standard processors, together with Open Source software and software supplied by independent vendors, form the foundations of our NovaScale servers. They are the key to better price/performance ratios and greater flexibility, the best guarantee of ongoing return on your investment, and the surest way to ensure that your applications can rapidly take advantage of the wealth of innovations coming out of the research and business communities behind Open Source and standards-based products.
2. Highly effective cluster architectures, for the best possible fit with your applications
Cluster infrastructures designed by Bull are sized and structured to match each customer’s applications and the volumes of data being transferred, enabling those applications to deliver optimal performance. The cluster is easily installed and administered from a single point of control that manages not only the hardware platform, but also the interconnect network and storage. And because HPC systems are used in production, where high availability is essential, clusters continue to operate even if one or more components fail.
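The single-point-of-control idea above can be illustrated with a minimal sketch. This is a hypothetical example, not Bull’s actual management software: a central manager tracks node health and keeps the cluster operating by excluding failed nodes from the scheduling pool.

```python
# Illustrative sketch (hypothetical, not an actual Bull API): a single
# point of control that tracks cluster nodes and keeps the system
# running when a node fails, by excluding it from scheduling.

class ClusterManager:
    """Central controller for the compute nodes of a cluster."""

    def __init__(self, nodes):
        # All nodes start out healthy and eligible for work.
        self.status = {node: "up" for node in nodes}

    def report_failure(self, node):
        # A monitoring agent would call this when a heartbeat is missed.
        if node in self.status:
            self.status[node] = "down"

    def schedulable_nodes(self):
        # Jobs are dispatched only to nodes still marked "up",
        # so the cluster keeps operating despite individual failures.
        return [n for n, s in self.status.items() if s == "up"]


manager = ClusterManager(["node01", "node02", "node03", "node04"])
manager.report_failure("node02")
print(manager.schedulable_nodes())  # node02 is excluded
```

In a production system this role is played by a workload manager and monitoring stack; the point here is simply that one controller holds the cluster-wide view, so a component failure narrows capacity rather than stopping work.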
“From processor-centric to data-centric”: putting data at the heart of IT architecture
Until now, users and their applications have had to adapt themselves to the computers. Bull is keen to offer the opposite: solutions that adapt to the applications they run, because it is the data, not the computer, that is at the heart of users’ jobs. With this in mind, Bull is currently developing the next generation of technology solutions, with data at the heart of the system architecture. Via the middleware layer, data will be usable in every phase of a scientific or engineering task, from initial modeling through computation to exploitation of the final results. Compared with existing solutions, the main change is the addition of middleware layers that make the use of the various IT resources involved in each phase far more transparent. By adapting much more closely to user needs, this kind of architecture will further improve overall system efficiency.
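The data-centric workflow described above can be sketched as a simple pipeline. All names and phases here are hypothetical illustrations, not an actual Bull middleware interface: a dataset is carried through modeling, computation and results phases, while the pipeline layer hides which resource handles each step.

```python
# Illustrative sketch of a data-centric workflow (hypothetical names,
# not an actual middleware API): the dataset, not a particular machine,
# is the constant that flows through every phase of the task.

def run_pipeline(dataset, phases):
    """Apply each named phase to the dataset in turn."""
    for name, transform in phases:
        # A real middleware layer could dispatch each phase
        # transparently to the most suitable resource (compute node,
        # visualization server, archive storage, ...).
        dataset = transform(dataset)
    return dataset


result = run_pipeline(
    [1.0, 2.0, 3.0],
    [
        ("modeling", lambda d: [x * 2 for x in d]),      # prepare model inputs
        ("computation", lambda d: [x ** 2 for x in d]),  # run the simulation
        ("results", lambda d: sum(d) / len(d)),          # exploit the results
    ],
)
print(result)
```

The design point is that each phase consumes and produces data, so the system can be reorganized around the data’s lifecycle rather than around any one processor.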
Bull’s expertise in complex IT infrastructures and architectures means we know just how to optimize data management in HPC clusters. This is an area dogged by the same kinds of problems of balancing processing capacity, I/O flows and hierarchical storage organization as are found in large-scale dedicated data management systems.
More than most, Bull combines extensive, long-standing expertise in mainframe and large-scale server architecture, operating systems and systems administration with open systems based on industry-standard hardware and software components.
We know the meaning of mainframe-class reliability and performance!