n°24 | March 2008
Executive opinion

Bull, “Architect of an Open World™”: as applied to High-Performance Computing
By Fabio Gallo, Director of Bull's HPC Solutions

Vice-President and Director of HPC Solutions at Bull since February 2008, Fabio Gallo was previously Vice-President responsible for sales development in Europe and the Middle East at Scali, a company specializing in middleware for networked HPC solutions. Before that, he held senior positions at IBM, SGI and, most recently, Linux Networx Inc., where he increased European and Middle Eastern revenues five-fold in three years.

The power of High-Performance Computing (HPC) systems is growing at a breathtaking pace: in the space of just a year, the cumulative power of the world’s 500 most powerful systems has doubled. A multitude of technologies and architectures have been used to build these systems, and Bull has taken a clear position among them, because not all infrastructures and architectures are equal. Depending on the type of application involved, some will be appropriate while others risk disappointing users in real-life operation. Bull has made some decisive strategic choices to give its customers a high degree of freedom, putting the emphasis on two key factors:

  1. Open systems based on industry standards, guaranteeing flexibility and a highly competitive price/performance ratio
  2. A highly effective cluster architecture, perfectly in tune with the applications used and the amount of data being transferred

Flexible, powerful systems
It’s always risky to predict the future, but architectures consisting of clusters of standard servers – which now account for 80% of the world’s most powerful supercomputers – have undeniable advantages that ensure their durability in the supercomputer marketplace, compared with other types of architecture such as MPP (Massively Parallel Processing) systems or declining vector systems. Advances in systems integration techniques mean that several thousand servers – i.e. tens of thousands of processing cores – can be brought together to build extremely powerful clusters delivering several hundred Teraflops. And ‘petaflop-scale’ clustering technologies are already being tested by major manufacturers and academic research laboratories.
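
To give a sense of the arithmetic behind these figures, here is a back-of-the-envelope sketch in Python; every number in it (node count, cores per node, clock speed, FLOPs per cycle) is an illustrative assumption, not a figure taken from this article.

    # Back-of-the-envelope peak-performance estimate for a cluster of
    # standard servers. Every figure here is an illustrative assumption,
    # not a specification taken from the article.

    nodes = 2000            # servers in the cluster
    cores_per_node = 8      # e.g. two quad-core processors per server
    clock_ghz = 3.0         # core clock frequency in GHz
    flops_per_cycle = 4     # double-precision FLOPs per core per cycle

    cores = nodes * cores_per_node
    # GHz x FLOPs/cycle gives GFLOPS per core; divide by 1000 for Teraflops.
    peak_tflops = cores * clock_ghz * flops_per_cycle / 1000

    print(f"{cores} cores -> ~{peak_tflops:.0f} peak Teraflops")
    # 16000 cores -> ~192 peak Teraflops: a few thousand standard servers
    # really do add up to hundreds of Teraflops of theoretical peak.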

The main advantage of clusters is their flexibility, which enables the building of systems that are specifically adapted to users’ needs. This is the best way of guaranteeing that performance in real-life situations reaches the level that users expect.

Clusters also offer much greater flexibility when it comes to memory per processor core. While many MPP systems are restricted to very limited memory, with clusters you can choose how much memory is associated with each processor core depending on the applications involved – computer simulation code, for example, runs much more efficiently with a larger amount of memory per core.
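
As a simple illustration of this trade-off, the sketch below compares memory per core across a few hypothetical node configurations; all sizes and the application requirement are invented for the example.

    # Memory-per-core comparison across hypothetical node configurations.
    # Sizes are invented for the example, not vendor specifications.

    configs = {
        "lean compute node":  {"mem_gb": 8,  "cores": 8},
        "standard node":      {"mem_gb": 16, "cores": 8},
        "large-memory node":  {"mem_gb": 64, "cores": 8},
    }

    app_need_gb_per_core = 4.0  # assumed requirement of a simulation code

    for name, c in configs.items():
        per_core = c["mem_gb"] / c["cores"]
        verdict = "fits" if per_core >= app_need_gb_per_core else "too little"
        print(f"{name}: {per_core:.1f} GB/core ({verdict})")
    # With a cluster the buyer picks this ratio per node type; many MPP
    # designs fix it once for the whole machine.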

Clusters also provide much greater flexibility when it comes to input/output (I/O) management: the I/O structure is completely modular and can be adapted to the nature of the code – with a larger or smaller number of I/O nodes, connections, and so on. Processing power is not the only thing that has to be taken into account: the I/O bandwidth must be appropriate too.
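
One way to picture this modularity is to size the I/O tier against a concrete target, for instance writing a full-memory checkpoint within a given time budget. The sketch below does exactly that; all figures are illustrative assumptions.

    import math

    # Sizing sketch: how many I/O nodes are needed so that a full-memory
    # checkpoint completes within a time budget? All figures are
    # illustrative assumptions.

    total_memory_tb = 32        # aggregate memory to write out (TB)
    budget_s = 600              # acceptable checkpoint time (seconds)
    bw_per_io_node_gbs = 0.4    # sustained write bandwidth per I/O node (GB/s)

    required_gbs = total_memory_tb * 1024 / budget_s
    io_nodes = math.ceil(required_gbs / bw_per_io_node_gbs)

    print(f"~{required_gbs:.1f} GB/s aggregate -> {io_nodes} I/O nodes")
    # Halve the budget or the per-node bandwidth and the I/O tier must
    # grow accordingly: the modular structure lets you resize just that tier.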

Cluster interconnection networks generally offer a higher level of performance. Standard interconnects are used – with no ties to a specific server design – which makes it much easier to take advantage of the latest technological developments. Here too, cluster architecture means you can choose the interconnection network that offers the best price/performance ratio relative to the application’s actual communication needs.
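
A toy cost model can make this choice concrete. The sketch below compares two interconnect options using a simple latency-plus-bandwidth transfer model; the latencies, bandwidths and port prices are invented for the illustration, not real product figures.

    # Toy price/performance comparison of two interconnects using a simple
    # latency + bandwidth transfer model. Latencies, bandwidths and port
    # prices are invented for the illustration, not real product figures.

    interconnects = {
        "commodity ethernet": {"latency_us": 50.0, "bw_gbs": 0.125, "usd_per_port": 100},
        "high-speed fabric":  {"latency_us": 2.0,  "bw_gbs": 2.0,   "usd_per_port": 800},
    }

    msg_bytes = 64 * 1024  # typical message size of the target application

    for name, ic in interconnects.items():
        transfer_s = ic["latency_us"] * 1e-6 + msg_bytes / (ic["bw_gbs"] * 1e9)
        print(f"{name}: {transfer_s * 1e6:.0f} us per 64 KiB message, "
              f"${ic['usd_per_port']}/port")
    # A latency-tolerant, bandwidth-light code may be perfectly well served
    # by the cheaper network; a tightly coupled one justifies the fast fabric.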

When it comes to the applications themselves, it takes much less effort to adapt code for clusters of standard servers. On the one hand, they offer superior performance per core compared with MPP systems of similar overall power, which, because of their lower per-core performance, require a greater degree of application parallelization. On the other hand, clusters use a completely standard operating system, which means, for example, that there is no need to reprogram application I/O: a task often required with a proprietary system. In scientific computing as in other areas, the proprietary route brings constraints in terms of independence and freedom to evolve. That is one of the major reasons why vector systems, once the pioneers of scientific computing, now represent less than 1% of the world’s most powerful computers.
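
As a minimal illustration of this portability, here is a sketch using the standard MPI programming interface (via the mpi4py Python bindings, assumed to be installed): nothing in it is tied to a particular vendor’s hardware or operating system, so the same code runs unchanged on any standards-based cluster.

    # Minimal sketch of portable cluster code using the standard MPI
    # interface (mpi4py bindings, assumed installed). Nothing here is tied
    # to a particular vendor's hardware or operating system.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    # Each process computes a partial sum; rank 0 collects the total.
    partial = sum(range(rank, 1_000_000, size))
    total = comm.reduce(partial, op=MPI.SUM, root=0)

    if rank == 0:
        print(f"sum over {size} processes: {total}")
    # Launched with, for example: mpirun -np 64 python sum.py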

One last point – probably the most important from a prospective buyer’s point of view: for the same processing power, clusters of standard servers are the most cost-effective solutions, including the cost of maintenance, even for extremely large-scale configurations. This is a direct result of using widely available processors and other components, and of the ability to fine-tune the scale of such systems to actual needs, which avoids over-sizing.
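
The right-sizing argument can be put in numbers. The sketch below compares the three-year cost of a cluster sized to actual needs with a 40%-over-provisioned one; the prices and maintenance rate are invented for the example.

    # The right-sizing argument in numbers: three-year cost of a cluster
    # sized to need versus a 40%-over-provisioned one. Prices and the
    # maintenance rate are invented for the example.

    usd_per_node = 5000     # purchase price per standard server
    maint_rate = 0.10       # annual maintenance, as a fraction of price
    years = 3

    def total_cost(nodes: int) -> float:
        """Purchase plus maintenance over the whole period."""
        return nodes * usd_per_node * (1 + maint_rate * years)

    right_sized, over_sized = 1000, 1400
    print(f"right-sized: ${total_cost(right_sized):,.0f}")
    print(f"over-sized:  ${total_cost(over_sized):,.0f}")
    # right-sized: $6,500,000 / over-sized: $9,100,000 under these assumptions.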

To ensure that clusters are just as easy to administer as a single system, Bull has developed the Bull Advanced Server (BAS) HPC software: a robust solution based on Linux, providing all the functions needed to manage a cluster – including the interconnection network and storage.

Finally, although cluster density long left a lot to be desired, the amount of floor space clusters occupy has shrunk considerably in recent years. Power per cabinet is constantly improving, and within a year it should exceed that of MPP systems.

To sum up, open cluster architectures offer the best levels of productivity for today’s computing centers: they keep the time spent adapting code to a minimum while maximizing throughput thanks to excellent I/O and network performance, all while capitalizing on the development efforts of a large community and reducing the costs of buying and maintaining your systems.

 

Infinite possibilities
With a cluster architecture, every supercomputer is unique, designed to respond to the customer’s specific needs and, in particular, to the application areas in use. A ‘hybrid’ system can also be put together that combines nodes of different architectures – for example, commodity servers and SMP servers – to cover the needs of a diverse set of applications more effectively. This can be of interest, for example, when running large-scale multiphysics or multi-scale simulations. Bull has already designed such hybrid systems for some of its customers, in which two types of server are managed transparently within the same Bull software environment, all with access to the same file system.
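
To illustrate the idea of routing work to the appropriate node type in such a hybrid system, here is a toy dispatcher; the node classes and thresholds are invented for the example, and a real installation would rely on the batch scheduler’s partitions rather than hand-written logic.

    # Toy dispatcher for a hybrid cluster mixing commodity and SMP nodes.
    # Node classes and thresholds are invented for the example; a real
    # installation would rely on the batch scheduler's partitions.

    def choose_node_type(mem_gb: float, cores: int) -> str:
        """Route a job to the node type that fits its profile."""
        if mem_gb / cores > 4 or cores > 64:
            return "SMP node"        # large shared-memory jobs
        return "commodity node"      # standard thin-node jobs

    jobs = [
        {"mem_gb": 16,  "cores": 8},    # typical thin-node job
        {"mem_gb": 512, "cores": 32},   # memory-hungry multiphysics step
    ]
    for job in jobs:
        print(job, "->", choose_node_type(job["mem_gb"], job["cores"]))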

Aiming for a position of leadership, and delivering petaflop developments
Bull HPC solutions have won over prestigious customers across France and Europe, as well as in Asia and South America. Following the implementation of the TERA-10 supercomputer for the French Atomic Energy Authority (CEA) in 2006 – at the time ranked the 5th most powerful supercomputer in the world, and number one in Europe – Bull has proved its worth with major players in industry (Dassault Aviation, Pininfarina…), universities and academic research centers, recently including four of the seven largest universities in Brazil, Cardiff University, the D-Grid network in Germany, and many more. Having been in the HPC business for less than three years, we have delivered over 100 supercomputers in 13 different countries. This is a telling illustration of the breakthrough that Bull – as Europe’s only IT manufacturer – is currently making in the world of scientific computing, and of just how appropriate the strategic technological choices we made three years ago have proved to be.

Bull is aiming to become the European leader in High-Performance Computing, which is why we are making significant investments in two key areas:

•  The ‘democratization’ of HPC: processing power – and in particular the competitive possibilities opened up by computer simulation applications – should not be the exclusive preserve of the largest organizations. Bull is therefore offering HPC solutions specially adapted to the SME marketplace and to medium-sized research establishments. It was with this in mind that we acquired Serviware, the leading French integrator in the area of HPC, with its extensive list of industrial customers.

•  Our R&D teams are currently working on the new generation of very large-scale supercomputers: so-called ‘petaflop’ machines. In addition to their own projects, our experts are actively contributing to numerous collaborative projects preparing the petaflop environments of the future, both hardware and software, in partnership with major industrial companies and university research centers.

Bull, working alongside its partners in the public and private sectors, is shaping the HPC architectures and software of tomorrow, and always with the same concern for openness and accessibility.

