Experts voice

Partitioning and virtualization: key technologies for server consolidation on Intel environments
Interview with Pierre Fumery, Head of Bull’s Linux Competence Centre

Pierre Fumery has made a substantial contribution, within Bull's R&D organization, to the development of AIX® partitioning and virtualization technologies (notably the WLM workload manager) in partnership with IBM, and subsequently to their development under Linux with XenSource. Today, he heads Bull's Linux® Competence Centre.

 
Historically, three major approaches have succeeded one another in server architecture:
• Centralization, with mainframes
• Distributed IT, with the advent of client-server
• A partial return to centralization, coincident with the development of Internet technologies.
Today, this trend continues to gather pace with the fourth IT revolution currently taking shape: that of the open systems world, supported by advances in interoperability and by de facto industry standards.
These changes are bringing consolidation strategies to the forefront of IT Directors' concerns.
Today, two technologies are key to the implementation of new-generation consolidations: partitioning and virtualization.
Bull has been heavily involved in these technological innovations, not only for Linux, but also for the Windows and GCOS server environments.
What are the respective advantages of these technologies in Intel-based environments? What are their limitations? In what scenarios should each be used? This article provides a brief overview of best practices and future trends.

 

Two key consolidation technologies: partitioning and virtualization
In an open world, how well organizations adapt to change is becoming a decisive factor in maintaining competitive advantage. This trend makes heavy demands on information systems, and notably on the data center. An organization can only be as flexible as its information systems. Nonetheless, a multiplicity of systems and disparate applications deployed over the years, built in silos using heterogeneous technologies, often translates into a highly complex operating platform.
Rationalization and consolidation strategies that capitalize on the latest generations of standard hardware and on new technologies are therefore appearing high on the list of IT Directors' current priorities.
There are two complementary approaches to consolidation, each of them with specific advantages and capable of adapting to different scenarios: partitioning and virtualization.

Partitioning: dividing the server into distinct machines
The first approach is partitioning, which can be either physical (PPAR) or logical (LPAR).
Physical partitioning (PPAR) ensures total isolation at the hardware and electrical level. Using this approach, a large server is effectively transformed into several smaller, independent servers. This option is available on NovaScale servers, for example, which can be divided into distinct Linux, Windows, or GCOS partitions. PPAR is therefore extremely effective where several different operating systems need to run in parallel with a high degree of isolation at the hardware level. The advantage is that you obtain independent servers within a single machine, managed from a single administration console. On the down side, there is no dynamic optimization of resources between partitions as workloads change.
Logical partitioning (LPAR) is achieved solely in software, at the operating system or firmware level (a layer that presents the operating system with a view of only the appropriate physical resources, thereby enabling more effective isolation). An LPAR at the operating system level is a good solution, for example, when the requirement is to consolidate the workloads of several applications and a database. The main benefit is the performance gain that comes from managing only one operating system, while the resources assigned to each workload remain isolated. One could, for example, assign three CPUs to an application server and five to the database, all in a highly optimized way, in conjunction with the operating system scheduler.
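As a rough illustration of this kind of OS-level CPU assignment, here is a minimal sketch using the standard Linux CPU-affinity interface, exposed in Python as os.sched_setaffinity. The process IDs and the three/five split are hypothetical, taken from the example above; this is a generic illustration, not a description of DDFA or of any specific Bull product.

import os

# Hypothetical PIDs of the two consolidated workloads; in practice these
# would be obtained from the process table or from a service manager.
APP_SERVER_PID = 1234   # application server instance (hypothetical)
DATABASE_PID = 5678     # database instance (hypothetical)

def pin_to_cpus(pid, cpus):
    """Confine a process (and its future children) to a fixed set of CPUs."""
    os.sched_setaffinity(pid, cpus)
    print("PID %d now confined to CPUs %s" % (pid, sorted(os.sched_getaffinity(pid))))

if __name__ == "__main__":
    # On an eight-CPU server: three CPUs for the application server,
    # five for the database, with no overlap between the two sets.
    pin_to_cpus(APP_SERVER_PID, {0, 1, 2})
    pin_to_cpus(DATABASE_PID, {3, 4, 5, 6, 7})

Changing the affinity of another user's process requires appropriate privileges, and full isolation of memory and I/O, as provided by LPAR-style solutions, goes well beyond what this simple affinity call offers.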
This kind of solution is now available with Bull's newly launched Dynamic Domain For Applications (DDFA) for NovaScale: the first Linux solution offering the equivalent of the container technologies available on Unix® systems (Solaris®, HP-UX®). DDFA is particularly suited to large NUMA (Non-Uniform Memory Access) servers with 8 to 16 CPUs. It intelligently optimizes the choice of CPUs to ensure maximum availability, while dynamically adapting the number of CPUs to the current load of each domain.
These partitioning technologies are both powerful and highly effective when it comes to dividing up whole units of resources (processors, etc.) on large-scale servers. On the other hand, they show their limits when a finer distribution is needed, that is, assigning more granular percentages of resources to a larger number of applications, for example on servers that have to host a great many applications. It is at this level that virtualization technologies come in, of which there are two major types.

Virtualization: Towards complete virtualization of the notion of ‘a server’
The first technology is simple load virtualization on a single operating system. This is the approach offered by a product such as SWsoft Virtuozzo for Linux on NovaScale: there is only one operating system, but each application 'thinks' it is running on its own dedicated system, with its own IP address and its own share of CPU and memory. For a simple approach to application consolidation (J2EE applications or Web servers, for example), this solution is very effective, with optimum performance and ease of implementation.
To reinforce the independence of each partition, one may want to run each operating system in its own independent virtual machine. This involves more advanced mechanisms, and powerful solutions of this kind are beginning to appear in standard Intel environments. Up to now, only purely software solutions that emulate the underlying hardware have been available: VMware and Microsoft Virtual Server. The launch in mid-2006 of Intel's VT technologies (Virtualization Technology, formerly known as 'Vanderpool') opens up new horizons, on both Itanium and x86 platforms. These technologies will enable virtualization to be managed at the processor level, will provide support for creating virtual processors in firmware and for hypervisor administration, and will open up the prospect of combined control of hardware, firmware and software. They will allow future versions of VMware (rewritten to take advantage of this hardware support) and, above all, the new open source Xen solution (supported by Bull and the majority of manufacturers) to achieve a significant leap forward in performance and security, combining the best of virtualization technologies within a totally standard environment!
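A simple way to check whether a given server actually exposes this hardware support is to inspect the CPU feature flags. The minimal sketch below assumes a Linux x86 system, where Intel VT is advertised as the 'vmx' flag in /proc/cpuinfo (the AMD equivalent is 'svm'); note that the feature can be present yet disabled in the BIOS.

def hardware_virtualization_flags(cpuinfo_path="/proc/cpuinfo"):
    """Return the hardware-virtualization flags advertised by the CPUs, if any."""
    flags = set()
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
    return flags & {"vmx", "svm"}

if __name__ == "__main__":
    found = hardware_virtualization_flags()
    if found:
        print("Hardware-assisted virtualization available: " + ", ".join(sorted(found)))
    else:
        print("No VT/AMD-V flag found; only software-based virtualization is possible.")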
This is an area in which Bull has considerable involvement through its contributions to Xen within XenSource, particularly in the area of multiprocessor administration. These technologies will be integrated into Novell SuSE Linux (SLES 10, mid-2006) and Red Hat Enterprise Linux (RHEL 5, late 2006 or early 2007).
Each user needs to choose among these technologies, according to their application infrastructure, in order to achieve the operational configuration that best meets their needs.

The example of NovaScale servers: four major technologies available

Partitioning, HA: future technologies in view
Above and beyond these developments, what is the outlook? Historically, Linux has moved away from virtualization towards partitioning. This is not surprising, given that Linux players have always aimed for universality from the outset, even if the VT technologies of chip makers like Intel now enable them to extend their range of solutions by relying more closely on the hardware specifics of standard platforms.
As regards the future, there are likely to be three main trends:
• Resurgent interest in logical partitioning. Virtualization is not suitable for every context: it is a very interesting technology, but one that consumes a lot of resources (several operating systems running in parallel, etc.). There are cases where logical partitioning at the application level is more effective, hence the new developments in this area, such as DDFA.
• A major R&D drive to reinforce high availability, notably from players like Bull, for which this aspect is essential. For example, this involves reinforcing functionality intrinsic to the Linux kernel. These developments are progressively being integrated into major distributions such as RHEL (Red Hat) and SLES (Novell/SuSE).
• The development of administration tools and 'VMotion'-type technologies that allow an executing application instance, running in a virtual machine, to be migrated from one server to another (already available with VMware, this capability will come to Xen with version 3); a minimal migration sketch follows below.
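As a rough idea of what such a migration looks like with the Xen 3 command-line tools, the minimal sketch below simply wraps the xm migrate command; the guest and destination names are hypothetical, and the destination host must already be configured to accept relocation requests.

import subprocess

def live_migrate(domain, destination_host):
    """Live-migrate a running Xen guest to another physical server."""
    subprocess.check_call(["xm", "migrate", "--live", domain, destination_host])

if __name__ == "__main__":
    # Hypothetical guest and target server names, purely for illustration.
    live_migrate("web-frontend-vm", "novascale-node2.example.com")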

A contribution to the convergence towards recentralized 'meta-systems'?
The development of virtualization, together with the advent of architectures such as grid computing and metafile systems, shows that new technologies will eventually enable those who wish to do so to move over to a recentralized meta-system: one whose components are distributed, but which offers a unified approach to administration and a unified view of the system.
On the other hand, what we call recentralization today is quite different from what it was, even in the recent past. Users no longer want to know exactly what is where: they want to administer services, not items of hardware. Day-to-day administration has thus moved away from the world of users into the much narrower world of specialized administrators.
There are two main alternatives when it comes to implementing these architectures:
• Blade/rack server farms
• Large-scale servers that can themselves be put into a grid formation
There are notable differences between these two approaches: blade servers offer a simplicity that suits some distributed, easily parallelizable applications, while large-scale servers offer clear advantages in terms of flexibility (due mainly to partitioning and virtualization) and of RAS (Reliability, Availability, Serviceability), which make them undeniably the back-office solution of choice, while also opening up new horizons for successful middle-office use.

As is often the case, it is essential to recognize that the constraints are not so much technological as human: in the wake of the distributed IT era, users often look askance at having to abandon 'their' server and depend on centralized server resources!
Partitioning and virtualization technologies, which at the end of the day can in fact preserve a distinct ‘identity’ for every virtual ‘server’, offer an interesting alternative…

Further information can be obtained from Bull's consolidation white paper, due to appear in the third quarter of 2006.

 

Bull also offers an advanced virtualization solution for the AIX® environment.
