January 2006
Experts voice
Codename: FAME2
Launched last year in collaboration with Bull, FAME2 is one of the first projects to result directly from the French Atomic Energy Authority's new open systems policy.
Its aim? To develop new servers designed for high-performance computing and multimedia data processing.

The objective of the FAME2 program, launched in 2005 under the joint sponsorship of Bull and the CEA (the French Atomic Energy Authority), is to develop by 2008 a new generation of servers specifically designed for high-performance computing (HPC) and multimedia data processing. The acronym FAME (Flexible Architecture for Multiple Environments) denotes an architecture designed from the outset around two key criteria: hardware extensibility (a variable number of processors, for example) and versatility with respect to operating system software. The project forms part of the Ter@tec series of initiatives run within the System@tic* competitiveness cluster in the Paris region. It brings together many players from the worlds of research (IRISA, the Universities of Versailles and Évry, the French national telecommunications institute, the École centrale de Paris) and industry: software publishers (ILOG), innovative new companies (RESONATE, CAPS-Entreprise and NewPhenix) and experienced user organizations (Dassault-Aviation and the French petroleum institute).

Why so many partners? Technological developments in HPC are enabling new players to enter the marketplace, but only if they offer products based on highly competitive technologies and can build up a network of complementary skills. Three trends characterize this evolution. The first is standardization. In software, the major code components of processing applications now tend to have a much longer lifespan than the information systems on which they run.

The major code components of processing software have a longer lifespan than information systems

Hence the desire for a long-lasting environment in which to develop processing code. In practice, this has resulted in increasing use of Open Source software, or of software products from ISVs that have become established as de facto industry standards and can be shared by widespread user and developer communities.

This trend towards standardization also applies to hardware. The use of COTS (Commercial Off-The-Shelf) components is becoming more and more widespread, forcing hardware manufacturers to carry out only those developments that are strictly essential to bring new products rapidly to market.

A second major technological turning point relates to the very high volumes of data crunched by HPC applications. It is now understood that the overall productivity of an IT architecture is directly linked to its capacity to manage the information flows and data volumes resulting from simulations or experiments. Alongside these requirements, active communities are emerging that focus on the development of specific middleware for this area. Finally, the third major trend is the integration of multimedia capabilities within high-performance simulation applications.

The FAME2 project will take advantage of these evolutions. It targets the development of a partitioned-memory multiprocessor that will, by 2008, use the new generation of Intel processors. Since processor clock speeds must be kept in check to contain energy consumption, one of the challenges for the project is to achieve extremely high levels of parallel processing within the servers efficiently. The use of multi-core processors, which combine several processing cores in a single package, makes it possible to house several hundred processing units in a single server. This approach inevitably involves compromises and optimization, both in the hardware architecture (the hierarchy and coherence of information held in memory) and in the software architecture (management of parallel processing, operating and development environments).
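To make the idea of massive in-server parallelism more concrete, here is a minimal, purely illustrative sketch in C using OpenMP, a de facto standard for shared-memory parallelism in HPC. It is not FAME2 code: the array size and workload are arbitrary assumptions, and only the OpenMP pragma and runtime calls are standard. Each thread fills and sums its own slice of an array, and a reduction combines the partial results, so the same program scales from a handful of cores to several hundred; performance grows with thread count rather than clock speed, which is exactly the trade-off multi-core designs impose.

/* Minimal OpenMP sketch of shared-memory parallelism (illustrative only,
 * not FAME2 code). Build with: gcc -O2 -fopenmp sum.c -o sum */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

int main(void)
{
    const long n = 100000000L;        /* arbitrary problem size */
    double *a = malloc(n * sizeof *a);
    if (a == NULL)
        return EXIT_FAILURE;

    double sum = 0.0;

    /* Each thread works on a slice of the array; the reduction clause
     * combines the per-thread partial sums, so the result does not
     * depend on how many cores are used. */
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < n; i++) {
        a[i] = (double)i;
        sum += a[i];
    }

    printf("processors available: %d, sum = %.0f\n",
           omp_get_num_procs(), sum);

    free(a);
    return EXIT_SUCCESS;
}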

Moving ahead, step by step

To meet these challenges, the FAME2 project has been organized into four main workstreams. The first covers the software architecture and will focus more specifically on the development of the Linux kernel, the management of independent processing 'threads' and code generation for multi-core processors. The second concerns the hardware architecture, with particular attention to NUMA (Non-Uniform Memory Access) architectures, which regulate access to the various areas of memory, as well as to I/O operations; a short illustration follows below. The third workstream will concentrate on data management: alongside the well-known HPC applications from the energy and aeronautics industries, the management of large-scale XML-format databases and of multimedia dataflows has been chosen as especially representative of emerging applications. Finally, a fourth workstream will address the issues of integrating the server into the data center and of IT security.
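As an aside on the NUMA theme of the second workstream, the sketch below uses the Linux libnuma library to place a buffer on a chosen memory node. The library calls (numa_available, numa_max_node, numa_alloc_onnode, numa_free) are standard libnuma functions; the buffer size and node choice are arbitrary assumptions, and the example is only meant to show why memory placement matters when access times are non-uniform, not to reflect the project's actual design.

/* Illustrative NUMA-aware allocation sketch using libnuma (Linux).
 * Not FAME2 code. Build with: gcc numa_demo.c -lnuma -o numa_demo */
#include <stdio.h>
#include <stdlib.h>
#include <numa.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return EXIT_FAILURE;
    }

    int last_node = numa_max_node();
    size_t bytes = 64UL * 1024 * 1024;    /* arbitrary 64 MB buffer */

    /* Bind the buffer to one memory node: threads running on CPUs
     * attached to that node will see lower access latency than
     * threads on remote nodes. */
    double *buf = numa_alloc_onnode(bytes, last_node);
    if (buf == NULL)
        return EXIT_FAILURE;

    /* Touch the pages so they are actually committed on that node. */
    for (size_t i = 0; i < bytes / sizeof *buf; i++)
        buf[i] = 0.0;

    printf("allocated %zu bytes on NUMA node %d (of %d nodes)\n",
           bytes, last_node, last_node + 1);

    numa_free(buf, bytes);
    return EXIT_SUCCESS;
}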

The project is scheduled to last 18 months, and a number of milestones have been set, including: defining the key points and basic elements of the architecture; demonstrating the feasibility and efficiency of a homogeneous internal interconnect architecture; and providing a demonstrator to emulate a server running the most up-to-date microprocessors with exceptionally high levels of parallel processing. After this, the next stage will be to create a development and optimization environment that takes account of the new hardware architecture, in order to make typical HPC applications available and to prove the server's effectiveness for these applications. Finally, the last stage will be to offer comprehensive solutions for accessing very large databases.

Claude Camozzi is Vice-president Platform Strategy at Bull.

Pierre Leca is the head of the simulation and information sciences department at the military applications division of the CEA (the French Atomic Energy Authority), based in the Île-de-France region near Paris.

Translated from the special edition of "La Recherche" published January 2006.

* Established in 2005, System@tic Paris Region is a 'competitiveness cluster' which aims to give the Île-de-France region around Paris a world-class capability in complex systems development. It focuses on four main industry sectors: automotive and transport, security/defense, telecommunications and systems design/development.
System@tic cluster Paris Region: www.polelsc.org/pole_logiciel_et_systemes_complexes.php3

NovaScale Servers: www.bull.com/novascale/hpc.html

Bull White Paper: The new challenges of High Performance Computing:
http://www.bull.com/download/whitepapers/hpc.pdf

 

 
