n°19  |  October   2007
Executive opinion

The advent of the ‘Bio Data Center’: rediscovering the path to agility
By Philippe Miltin, General Manager, Bull Products and Systems

Most organizations – whether they operate in the public or private sector – are aiming to optimize their data centers: making them simpler, less costly to operate, more flexible and more in tune with sustainable principles. The challenge? To improve data center performance and agility, as well as the capacity to adapt to new markets or rapidly bring new services on line for customers or citizens.
The complexity of today’s data centers and their ability to interact with other ecosystems, even the planet itself, means they are genuinely ‘living’ systems. We’re at the dawn of a new era, the age of the ‘Bio Data Center’.

Like any living system, the data center needs to be managed globally and coherently. And like other living systems it consists of many interconnected sub-systems, each with its own level of independence and its own specific needs. More than that, data centers interact with the environment with their high consumption of electricity, heat output and product recycling issues. That’s why we believe it is so important to take all these issues into account and look at every aspect of the data center when planning its evolution.
We recommend taking a pragmatic approach, based on our extensive experience of complex IT architectures and of infrastructures combining mainframes and open systems, as well as on the lessons we are learning from the vast ‘tera-infrastructures’ at the heart of today’s largest systems, such as Google’s and the CEA’s TERA-10 [1].


The path to simplify IT infrastructures: centralization or distribution?
The maturing of the Internet, driven by the rise of Web 2.0, is distributing computing resources and processes across the whole planet, on a scale we have never seen before. At the same time, economic globalization is resulting in significant waves of consolidation, driven in part by the constant search for economies of scale.
Of course, it’s vital to simplify IT infrastructures. Technologies for consolidation and virtualization are now mature, and proven in practice. But everyone knows it is impossible to consolidate and centralize absolutely everything, because there is a certain level of heterogeneity beyond which you cannot progress. And because diversity – when it’s under control – can be very productive: your IT doesn’t have to be all black or purple or blue... it can be a rainbow of colors.
For all these reasons, we believe there are three essential conditions for ensuring that data centers perform to their full potential:

  • Implementing effective operational processes that ensure users’ service requirements are met (through SLAs [2])
  • Optimizing the topology of the data center, to maximize performance and flexibility
  • Managing heterogeneous components to turn diversity into a real asset.

Effective operational processes to meet SLA commitments
The world’s best-known Internet sites, like Amazon and Google, are built around vast infrastructures. Another example of a massive system, which our engineers are very familiar with, is TERA-10, the supercomputer designed and implemented by Bull for the French Atomic Energy Authority, the CEA. It features over 600 servers, almost 10,000 processors, 6 petabytes [3] of storage, and over 55 miles (90 kilometers) of cabling. TERA-10 consumes as much electricity as a town with 3,000 inhabitants, and every three days it generates data that’s equivalent in volume to the National Library of France! That data has to be analyzed, stored and used for later simulation exercises. It’s an extremely complex configuration, and every one of the hundreds of thousands of system components (both hardware and software) needs to be constantly monitored. What does that mean in practice? Just how do you manage a cluster of 600 servers? Bull has designed a solution to ensure that it takes no more than 15 minutes to start all 600 machines.
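Bull’s actual tooling is not described here, but the arithmetic behind fast cluster start-up rests on a general principle: issuing power-on commands in parallel fan-out rather than one node at a time. The sketch below is purely illustrative (the node names, the 90-second boot time and the fan-out of 64 are hypothetical assumptions, not TERA-10 figures):

```python
from concurrent.futures import ThreadPoolExecutor

NODES = [f"node{i:03d}" for i in range(600)]  # hypothetical node names

def power_on(node: str) -> float:
    """Placeholder for a real out-of-band power-on command
    (e.g. via a server's management card). Here it simply
    returns an assumed fixed per-node boot cost, in seconds."""
    return 90.0

def serial_time(nodes) -> float:
    # One node at a time: total time is the sum of every boot.
    return sum(power_on(n) for n in nodes)

def parallel_time(nodes, fanout: int = 64) -> float:
    # With `fanout` concurrent commands, wall-clock time is
    # roughly one boot time per "wave" of nodes.
    with ThreadPoolExecutor(max_workers=fanout) as pool:
        times = list(pool.map(power_on, nodes))
    waves = -(-len(nodes) // fanout)  # ceiling division
    return max(times) * waves

print(serial_time(NODES) / 3600)  # serial: 15.0 (hours)
print(parallel_time(NODES) / 60)  # fan-out of 64: 15.0 (minutes)
```

Under these assumed figures, booting 600 nodes sequentially would take 15 hours; a fan-out of 64 concurrent commands brings the same work down to around 15 minutes, which shows why orchestrated parallelism is what makes clusters of this size manageable at all.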

Over and above their raw performance, one of the main lessons to be learned from these kinds of IT architectures is how to effectively manage the thousands of hardware and software components that make them up, to the point where even the largest of systems can be controlled from a single screen, just like a single, basic server.
Drawing in particular on our experience with TERA-10, we have developed a large number of components, which we have started to incorporate into our offerings, to enable the administration of complex infrastructures. Together with our partners, we are developing innovative products to help simplify your day-to-day processes, IT security, service continuity, business recovery and monitoring. Combined with our proven methodologies and our High-Availability Center, these tools can be used to strengthen and support your service level agreements (SLAs).

Optimizing data center topology to maximize performance and flexibility
Nowadays we all know that a significant proportion of data center resources is under-utilized, even as some applications – conversely – don’t have enough. This seems to be a universal problem, but it can’t go on like this.
Simplifying and optimizing data center topology also involves consolidation and virtualization. This means fewer geographic locations are needed, and the number of servers and storage systems can be optimized. These moves result in fewer IT resources being used, and deliver significant benefits in terms of lower power consumption and heat dissipation. In many ways it’s a harmonious marriage between cost saving and sustainability, and it will be a major feature of tomorrow’s ‘Bio Data Centers’. We have developed a consolidation and optimization methodology designed specifically for heterogeneous environments. In addition, our servers feature powerful virtualization technologies to optimize resource utilization.
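The arithmetic of consolidation can be illustrated with a simple packing exercise. The figures and the first-fit-decreasing heuristic below are hypothetical illustrations, not Bull’s methodology: twelve lightly loaded servers, each hosting one virtualized workload, are repacked onto as few hosts as possible:

```python
def consolidate(vm_loads, host_capacity):
    """First-fit-decreasing bin packing: place each VM (largest
    first) on the first host with room, opening a new host only
    when none fits. Returns the list of per-host load lists."""
    hosts = []
    for load in sorted(vm_loads, reverse=True):
        for host in hosts:
            if sum(host) + load <= host_capacity:
                host.append(load)
                break
        else:
            hosts.append([load])
    return hosts

# Hypothetical: 12 under-utilized servers, loads in % of one host,
# consolidated onto hosts capped at 80% utilization for headroom.
vm_loads = [30, 25, 20, 15, 10, 10, 30, 25, 20, 15, 10, 10]
hosts = consolidate(vm_loads, host_capacity=80)
print(len(hosts))  # 3 — the same work now fits on 3 hosts
```

In this illustration, twelve machines shrink to three, roughly a four-fold reduction in hardware, with the corresponding drop in power consumption and heat dissipation that the ‘Bio Data Center’ is aiming for.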

Managing heterogeneous environments to turn diversity into a real asset
Heterogeneous infrastructures are everywhere; what’s important is managing them effectively. As ‘Architect of an Open World’, we believe that the future transformation of every data center should take full advantage of its diversity. As one of the few manufacturers in the global IT market with real expertise in mainframes as well as Unix, Linux and Windows servers, Bull has all the skills needed to turn the diversity of your infrastructure into a real asset for your enterprise.
That also means we know how to integrate SMP, cluster and blade servers into the IT infrastructure, enabling a single server to support multiple environments, and offering pre-integrated and optimized ‘plug-and-play’ servers, such as those we have recently launched.
Our on-going developments in application middleware and Open Source are all pointing in this direction, as part of our strategy of alliances with major partners and our involvement in international consortiums. It’s also the driving force behind our developments in the areas of IT security, systems administration and data flow management systems for heterogeneous storage environments.

Every era in the history of IT brings new answers
In the medium term, the data center will almost certainly undergo an even more profound transformation. Legal constraints, security problems, issues related to sustainability, ever greater globalization, technological advances… all pose serious challenges, now and in the future. It won’t just be a question of switching to liquid cooling for your servers. Data centers will need to be totally re-engineered! Now is the time to start ensuring that future plans take all the dimensions we have explored into account, because they could well affect the choices you are making today.

Our vision of the ‘Bio Data Center’, combining homogeneous poles of technology and vast areas of heterogeneous systems in complete harmony with the environment, comes from the real world. It is rooted in our expertise in complex architectures, very large-scale IT systems and tera-architectures, combined with our vast experience helping customers to transform their infrastructures. We’re ready to help you reap the benefits of that knowledge.

[1] TERA-10: the most powerful supercomputer ever designed in Europe, operated by the French Atomic Energy Authority (the CEA)

[2] SLA: Service Level Agreement, setting out the service-level requirements of the business

[3] A petabyte = 10^15 bytes, i.e. a million billion bytes
