May 2007
Executive opinion

Everyone’s entitled to power, innovation within everyone’s reach
Benoît Hallez, Director of Bull HPC Business Unit

Innovation is key to our future. High-Performance Computing (HPC) is undergoing a democratic revolution, and is finally becoming widely available.

The capacity for innovation is a crucial and universal issue that concerns business decision-makers in most industry sectors, because it is at the heart of performance and competitive positioning for public and private sector bodies and research centers alike. Its main driving force comes from information technologies, which provide not only tools but also the kinds of new collaborative innovation models that we increasingly see appearing in our day-to-day lives, from Web 2.0 and Open Source software to co-operative publishing tools.
A number of challenges still remain, however, the main ones being how to access and master these tools most effectively.

Race for power
A critical feature of this decade, and of the digital society more generally, is the race for ever more computing power. Until now, technology itself has been the limiting factor when it comes to how much power we can deploy. Now it seems the technological obstacles are falling away, one by one.
Here are just some examples from the world of high-performance computing – which incidentally is no longer the exclusive preserve of the engineering industry – to illustrate the potential for innovation that these technologies offer: in particular, shorter development lifecycles and considerable cost savings. In just ten years, digital simulation has reduced the design timescale for a new vehicle from five years to two. Between two Formula 1 championships, the design of a car can be completely recalculated and a new model launched. This kind of computing power allows real-life simulation with an unprecedented level of precision, applied no longer just to components but to entire systems: whether in defense and nuclear simulation, automotive engineering, aeronautics, earth sciences (climatology, meteorology, geophysics) or life sciences. In the latter domain, researchers who have studied the human genome are now moving on to the study of the ‘physiome’, that is, the functioning of a complete sub-system of the human body such as the heart, lung or kidney.
In the automobile or aeronautical industries, crash simulations are infinitely less expensive than real-life trials. But this kind of computing power is also of interest to other sectors such as banking and insurance, where pure financial calculations are needed to measure and manage risk, using mathematical models. Simulation is therefore becoming a ‘real-time’ tool, with immediate uses, for example, in stock trading, where it can support decision-making on investments. We are also starting to see it applied to production optimization (one manufacturer of potato crisps has saved millions of dollars by optimizing their shape so that as many as possible can be packaged in the shortest possible time, without being damaged in the process…) and in the world of sport, where it is being used to optimize the design of equipment used by top sportsmen and women. Not to mention Business Intelligence, which relies on XML databases of several petabytes that also require phenomenal processing power.
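To make the risk example concrete, here is a minimal sketch, in C, of the kind of Monte Carlo calculation on which such models rest. It is purely illustrative: the normal-returns model, the loss threshold and the trial count are assumptions invented for the example, not anything used by Bull or its customers. What it does show is why these workloads are so hungry for computing power, and why they scale so naturally across large numbers of processors – every trial is independent of every other.

    /* Illustrative only: toy Monte Carlo estimate of the probability
       that a portfolio loses more than 25% in a year, assuming (for
       the example) normally distributed annual returns. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    /* One standard normal draw via the Box-Muller transform. */
    static double normal(void)
    {
        double u1 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
        double u2 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
        return sqrt(-2.0 * log(u1)) * cos(2.0 * 3.14159265358979 * u2);
    }

    int main(void)
    {
        const long trials = 1000000;   /* trials are fully independent */
        const double mean = 0.05, stdev = 0.20, threshold = -0.25;
        long losses = 0;

        for (long i = 0; i < trials; i++) {
            double annual_return = mean + stdev * normal();
            if (annual_return < threshold)
                losses++;
        }
        printf("Estimated P(loss > 25%%) = %.4f\n",
               (double)losses / trials);
        return 0;
    }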

So computing power can boost not only organizations’ capacity for innovation, but also their responsiveness and time-to-market capabilities. Given what is at stake in terms of the economy, industry and at government level, this power still has to be accessible in terms of cost; it is yet another basic economic constraint affecting world competition, and a daily challenge for our customers. A major preoccupation for us, as the European IT manufacturer, is to anticipate this economic dimension on behalf of our customers and help them access new technologies at optimum cost. So what does the future hold?

A new technological revolution…
Today it is no longer rising processor frequency that drives the growth in computing power, but the number of processor cores on a single chip. The current decade will be characterized by the increasing density of ‘multi-core’ architectures, following the advent of clusters in the 1990s, which had themselves displaced the vector systems of the 1970s and 80s. And it is through this new technological revolution that Moore’s famous law will continue to apply, and meet the requirements of the new generation of applications. Meanwhile, there are still some tricky challenges ahead, which only a small number of IT manufacturers the world over are capable of tackling, given the high degree of specialization required to deal with them.

The first, ‘eco-technological’ challenge is focused on electrical consumption and cooling for the thousands of processors that go to make up these veritable ‘IT power stations’. Just to illustrate the significance of this: Google is apparently considering moving its ‘IT farm’, the largest in the world, to within the Arctic Circle; it is anticipated that it will be consuming 70 Megawatts² by the year 2020, equivalent to a tenth of the output of a nuclear reactor!
This explains why we will be concentrating on applying our particular expertise in platform density and packaging, as well as in cooling and in energy management and optimization, to the petaflop³-capacity machines on which we are currently working. These will offer much higher granularity than current generations of servers – with 32- or even 64-core processors, compared with today’s four-core models – and much better performance-per-Watt ratios. The use of dedicated accelerators will also deliver processing speeds between ten and a hundred times faster than today, while consuming between two and five times less electricity.
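An illustrative back-of-the-envelope calculation (the per-core figure is an assumption for the example, not a machine specification) shows why density and energy efficiency dominate at this scale: a petaflop is 10^15 floating-point operations per second, so even if each core sustained around 10 gigaflops (10^10 operations per second), roughly 100,000 cores would be needed – at 64 cores per chip, well over a thousand processors to power, cool and keep working together.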

The second challenge concerns the parallelization tools that will make it possible for applications to take advantage of these new architectures and the very large number of cores on which they are built, because modifying the billions of lines of code in bespoke or off-the-shelf software and applications to take these new architectures into account would be prohibitively expensive.
The solution includes the development of middleware that will virtualize the parallelization. We are working with ISVs and the scientific community to develop these virtual parallelization tools. This technology is of considerable importance because the middleware will render these architectures transparent to applications themselves, and will therefore enable them to reap the benefits of the huge amount of power these machines are capable of generating.
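As a minimal sketch of the principle – using the standard OpenMP programming model rather than Bull’s own middleware, whose interfaces are not described here – the following C fragment shows how a runtime layer can spread an existing sequential loop across however many cores the machine offers, with a single annotation and no restructuring of the application logic:

    /* Sketch of runtime-managed parallelism (compile with -fopenmp).
       OpenMP stands in here for the general idea of a layer that
       makes the core count transparent to the application. */
    #include <stdio.h>
    #include <omp.h>

    #define N 1000000

    int main(void)
    {
        static double a[N], b[N];
        double dot = 0.0;

        for (int i = 0; i < N; i++) { a[i] = 1.0; b[i] = 2.0; }

        /* The one-line directive below is the only change to the
           sequential code; the runtime splits the iterations across
           the available cores and combines the partial sums. */
        #pragma omp parallel for reduction(+:dot)
        for (int i = 0; i < N; i++)
            dot += a[i] * b[i];

        printf("dot = %.1f using up to %d threads\n",
               dot, omp_get_max_threads());
        return 0;
    }

The application code stays sequential in form, and the same binary exploits four cores today or sixty-four tomorrow – precisely the kind of transparency that such middleware aims to generalize.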

The third challenge is related to mastering complexity, through the development of systems administration tools for tomorrow’s petaflop machines that will feature anything from 500 to 1,000 cores in a single rack.
We are well advanced in this area, having made our first successful foray with TERA-10, one of the most powerful supercomputers in the world, featuring 625 nodes, which has already been in production at the French Atomic Energy Authority, the CEA, for more than a year now. We have developed unique expertise there, and will be capitalizing on it as we develop the next generations of supercomputers: machines capable of managing vast file systems, by optimizing the Lustre Open Source solution for large clusters, and of administering both the system and its components, using NovaScale Master adapted to the world of HPC together with a particularly powerful configuration manager.
Finally, over and above supercomputers running under Linux®, we have a range of turnkey systems operating in a Microsoft environment that are very simple to implement and run.

… and a huge evolution in usage
Finally, we are going to support the increasing democratization of HPC, not just by integrating standard hardware and software – one of Bull’s key strategic choices – but also in terms of usage, via a service-led approach: a service that is accessible to everyone, since all sectors without exception need power, as we have already seen. This type of approach can deliver high levels of flexibility to meet the challenges posed by our digital society. We talk about the ‘agile’ enterprise, and this term seems to me to be especially accurate.
And in the world of HPC, Bull has shown an uncommon degree of agility. Very few enterprises could have moved, in a single leap, from 229th to 5th position in the TOP500 list of the world’s most powerful supercomputers. This was our achievement with TERA-10, and it demonstrated our exceptional capacity to mobilize Bull talent. Our know-how and our key differentiators are, and will increasingly be, directed towards high-performance computing architecture and the quality of integration of all the hardware and software components involved, as well as towards the quality of our teams supporting customers in their projects to optimize their applications on our servers. Because service is also part of Bull’s culture: a culture of commitment to high levels of service delivery.
As ‘Architect of an Open World’, Bull is now in perfect step with the major trends of the decade. As the only European IT manufacturer, the Group is a major name in IT, with an excellent record for technical expertise and a long history of technological innovation: from multiprocessor systems to smartcards, including a range of especially powerful encryption tools. Bull has deftly navigated the tide of technological advance, growing its products and services accordingly, while consistently meeting the evolving needs of its customers and exploiting new technologies as they emerge. Today, we are helping a large number of industrial and academic research centers in their approaches to innovation, including Alenia, the CCRT (Centre de Calcul Recherche et Technologie), Dassault Aviation, the French Atomic Energy Authority (CEA), Pininfarina, the UK’s National Oceanography Centre in Southampton, the Universities of Hanover, Manchester, Reims and Valencia, and Miracle Machines in Singapore, among others. These references are testimony to the Group’s emerging position as market leader in the HPC domain in Europe.

Everyone's entitled to power: choose Bull, and fly the Bull flag!

² Megawatt (MW): one million Watts
³ Petaflop: one million billion (10^15) floating-point operations per second
