N°29  |  September 2008
Experts voice

Meeting the challenges of service quality through virtualization: the AIX example
Christophe Loridan is a solution architect in Bull France’s sales team. Gert Prieber is a consultant in Bull’s Products and Systems Business unit.

IT Departments are increasingly asked to achieve more, with less. This means they have to optimize resources to deliver a service that is both effective and efficient. An effective service combines both the functional aspects and the quality of the service provided. And when it comes to service quality, industry standards such as ITIL V3 highlight the importance of implementing a policy of continuous improvement.

 

On the technological side, this kind of approach succeeds by capitalizing on system virtualization technologies, as these meet the basic requirements for continuity, availability and capacity management. A platform like AIX® therefore offers all the functionality needed for service quality to become a real factor in value creation. To make sure this happens, there are three golden rules:

    • Implementing a global approach to technical infrastructure
    • Establishing an on-going dialogue between prime contractors and the contracting authority
    • Taking operational issues into account from the very beginning of the project.
 

Nowadays both public and private sector organizations are preoccupied with rationalizing their resources and limiting expenditure. This trend applies equally to information systems (IS), and the watchword for today’s IT Department is to do more, with less. Doing more means that today’s information system must contribute to value creation by keeping in step with business strategy to perform the relevant tasks as and when required; with less means that the resources used must, as far as possible, be consolidated, shared and made more reliable. And this is where we run up against the potential contradiction between effectiveness, or the capacity to deliver the service required, and efficiency, which measures the relationship of this capacity to its cost. It is these problems of cost control that are currently driving many businesses to opt for consolidation projects. 

The effectiveness of the information system can be measured in two ways: according to business criteria, but also service quality criteria, which assess the way a service is delivered (for example, the response time for an application). Today’s IT Departments would find it unthinkable not to look at their system from these two points of view: the pressure to innovate means that functionality has to be constantly improved; while competitive pressures and rising energy costs, for example, dictate that infrastructures need to become ever more cost-effective while maintaining optimum levels of service quality.  

ITIL v3 shows the way forward
The recently released version 3 of ITIL – the industry standard for best practices in IS management – fully acknowledges this development, making a clear distinction between the two component parts of the value of an IT service: on the one hand its ‘fitness for purpose’, and on the other its ‘fitness for use’. So one could say that the whole art of information system governance depends on maintaining a balance between these two aspects of service delivery, in order to satisfy customers’ needs (both internal and external) while keeping costs to a minimum. This can best be achieved by implementing a structured continuous improvement plan. And it is with this objective in mind that ITIL v3 sets out a series of standards defining the life cycle of the services concerned in five areas: service strategy, service design, service transition, service operation and continual service improvement.
Alignment with good ITIL practice may at first sight appear to be an ambitious goal, especially for a modest-sized organization. But one can nevertheless find inspiration in the vision it offers: an IT organization that manages a portfolio of services systematically subjected to a process of continuous improvement. Service quality in particular – all too often reduced to a technical issue left until the end of the project – should really be subject to full management scrutiny: something that is properly negotiated, implemented, controlled and updated in a process involving both the business and the technical functions within the organization.

Virtualization in the front line
So how can we measure service quality? ITIL identifies the main criteria as being service continuity, capacity management and service availability. System virtualization offers a common technical solution addressing all of these issues, which is why it should appear high on the list of possible solutions when embarking on any methodical service quality management program. It takes an infrastructure-level approach: in other words, it is applicable to a variety of technologies and architectures. Above all, it enables a centralized and global approach. The Storage Area Network (SAN) made it possible to manage storage from a single vantage point, enabling system capacity to be rationalized and optimized just as the amount of data to be managed started to explode. The same kind of situation is starting to happen with servers: just as demand is soaring, computing power is increasing to the extent that the various service quality parameters can be handled in a totally holistic and dynamic way. Implementing a system virtualization infrastructure is a logical first step which paves the way towards much easier operational management of service quality. For example, the virtualization of Power® systems in Bull’s Escala® range is proving particularly useful for critical applications, most notably because it provides excellent scalability, no matter how heavy the resource loading it has to manage (CPU, memory, I/O interfaces...).

1. A simplified architecture 
The prerequisite for controlling service quality is, above all, having control over the infrastructure. And the simpler the architecture, the easier it is to manage, which is why consolidation projects aiming to reduce the number of servers are of such interest. From this angle, virtualization becomes a fundamental tool, all the more effective because it can also improve server utilization rates: a critical parameter when we take cost into account. In fact, a server with a 20% utilization rate consumes almost as much energy as a machine being used at full capacity. And the same applies to other fixed infrastructure costs, such as systems administration and software licenses. It could be argued that a server which is busy 20% of the time is ‘wasting’ 80% of the software licenses it supports (not to mention the hardware resources involved)! As a result, it is fundamentally important to choose a technology that enables high utilization rates. AIX is a very good example of this, as it offers a hypervisor-type virtualization layer administered by firmware, and is therefore capable of handling hardware resources very efficiently.
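As a rough illustration of the utilization argument, here is a minimal sketch with purely hypothetical figures (they are not Bull or IBM measurements): it treats energy, administration and license costs as roughly fixed per machine and computes what each unit of useful work costs at different utilization rates.

```python
# Illustrative sketch only: the cost figures are assumptions, not vendor data.
FIXED_COST_PER_SERVER = 10_000   # yearly energy + administration + licenses (assumed)
CAPACITY_UNITS = 100             # nominal units of work a server can deliver per year

def cost_per_useful_unit(utilization: float) -> float:
    """Fixed yearly cost divided by the work actually delivered."""
    useful_work = CAPACITY_UNITS * utilization
    return FIXED_COST_PER_SERVER / useful_work

for u in (0.20, 0.50, 0.80):
    print(f"utilization {u:.0%}: {cost_per_useful_unit(u):7.1f} per useful unit of work")

# At 20% utilization each unit of useful work costs four times what it costs
# at 80% - which is the 'wasted licenses' argument made above.
```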
The hypervisor manages several partitions, each capable of running an AIX or Linux® operating system. It allocates CPU, memory and input/output interfaces to the different partitions with very high levels of precision (down to 1/100th of a CPU, for example). By exploiting the complementary nature of the load profiles of the different partitions, it is possible to achieve utilization rates of over 90%.
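That 90% figure relies on partitions whose loads peak at different times of day. The sketch below uses invented hourly profiles (batch overnight, OLTP in the morning, web in the afternoon and evening) to show how complementary workloads can keep a shared processor pool almost fully busy with far less hardware than dedicated servers would need:

```python
# Hypothetical hourly CPU demand (in processor units) for three partitions
# whose peaks do not coincide; the figures are invented for illustration.
batch = [3.0] * 6 + [0.5] * 18               # peaks overnight (00:00-06:00)
oltp  = [0.5] * 6 + [3.0] * 6 + [0.5] * 12   # peaks in the morning (06:00-12:00)
web   = [0.5] * 12 + [3.0] * 10 + [0.5] * 2  # peaks in the afternoon and evening

POOL_SIZE = 4.0  # physical processors in the shared pool

total = [b + o + w for b, o, w in zip(batch, oltp, web)]
print(f"peak combined demand: {max(total):.1f} processors")
print(f"average pool utilization: {sum(total) / (24 * POOL_SIZE):.0%}")

# Sized separately, each partition would need about 3 processors of its own
# (9 in total); the shared pool serves the same peaks with 4 processors at
# roughly 95% average utilization.
```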

2. Capacity handling
It is always extremely difficult to determine up-front the number of servers that will be needed to achieve a given response time and a given availability rate, and this is made even more problematic because the load profile is bound to evolve throughout the lifecycle of the service. Far too often the decision is taken at the design stage to build in a safety margin, resulting in significant additional expenditure. Virtualization brings a degree of flexibility to production, so this margin does not need to be so big, and as a result fewer servers are required. At the same time, it makes it possible to adapt more easily to changes in operating conditions that can sometimes be extreme or exceptional (for example, Web applications that can experience big peaks in visitor numbers).
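To make the sizing argument concrete, here is a hypothetical comparison (the workload figures are invented) between dedicated servers, each sized for its own peak plus a design-time safety margin, and a shared virtualized pool sized for the worst combined peak:

```python
# Hypothetical peak demands (in processor units) for four applications whose
# peaks do not all occur at the same time; figures are purely illustrative.
individual_peaks = [4, 6, 3, 5]   # peak demand of each application on its own
combined_peak = 10                # worst simultaneous demand observed (assumed)
SAFETY_MARGIN = 1.3               # 30% headroom built in at design time

dedicated_capacity = sum(p * SAFETY_MARGIN for p in individual_peaks)
shared_capacity = combined_peak * SAFETY_MARGIN

print(f"dedicated servers:       {dedicated_capacity:.1f} processor units")
print(f"shared virtualized pool: {shared_capacity:.1f} processor units")

# The shared pool needs little more than half the capacity, because a single
# global safety margin covers all the applications instead of one margin each.
```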
To achieve this, the chosen technology must permit dynamic handling of changes to load profiles. Once again, AIX fits the bill. Its virtualization system allows both a guaranteed (minimum) level and an upper limit to be defined for the CPU and memory used by each partition or, conversely, can stipulate that a partition is free to use all available resources. Combined with a priority handling system, this approach allows the hypervisor to allocate resources dynamically to each partition as a function of need. As a result, systems administration costs can be drastically reduced in an area where, in the past, a full monitoring and accounting system would have been required. AIX also enables load handling in a multi-server context: its virtualization allows a partition to be moved, even while executing a task, from one server to another without affecting the response time for the end user, thanks to a sophisticated two-phase memory-to-memory copying mechanism (the partition mobility function). Partition mobility can be used to relieve an overloaded server or, conversely, to completely shut down an underused machine.
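By way of illustration, the simplified sketch below mimics this kind of allocation policy: each partition declares a guaranteed minimum, an optional cap and a weight, and spare pool capacity is shared out in proportion to the weights. The names, figures and the algorithm itself are illustrative assumptions, not the actual PowerVM implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Partition:
    name: str
    guaranteed: float        # minimum entitlement, always granted
    cap: Optional[float]     # upper limit, or None for an uncapped partition
    weight: int              # priority used when sharing spare capacity
    demand: float            # what the partition currently wants

def allocate(pool: float, partitions: list) -> dict:
    """Grant the guaranteed shares first, then split leftover capacity by weight."""
    grants = {p.name: min(p.guaranteed, p.demand) for p in partitions}
    spare = pool - sum(grants.values())
    hungry = [p for p in partitions if p.demand > grants[p.name]]
    total_weight = sum(p.weight for p in hungry) or 1
    for p in hungry:
        extra = spare * p.weight / total_weight
        ceiling = p.cap if p.cap is not None else p.demand
        grants[p.name] += min(extra, ceiling - grants[p.name], p.demand - grants[p.name])
    return grants

partitions = [
    Partition("oltp",  guaranteed=2.0, cap=None, weight=200, demand=5.0),
    Partition("batch", guaranteed=1.0, cap=3.0,  weight=100, demand=4.0),
    Partition("test",  guaranteed=0.5, cap=1.0,  weight=50,  demand=0.3),
]
print({name: round(v, 2) for name, v in allocate(8.0, partitions).items()})
# {'oltp': 5.0, 'batch': 2.57, 'test': 0.3}
```

The real hypervisor does this continuously, in firmware and at much finer granularity; the point of the sketch is simply that declaring minima, caps and weights up front removes the need for manual rebalancing.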

3. Increased availability
In order to increase service availability, the first step involves reducing the number of scheduled system downtimes. With the option of live partition mobility, hardware maintenance operations can now be scheduled so they have no effect on production. Availability is also closely linked to the quality of the pre-production procedures used for new versions of applications, which means optimizing the development, testing and handover environments up front. This brings us back to one of the very first applications of virtualization: thanks to the short provisioning time needed for a virtual server, and to resource optimization, the constraints traditionally associated with providing these environments largely disappear once they are virtualized.
Another challenge is to achieve better handling of unplanned system downtimes. In the case of AIX, the fact that the hypervisor is integrated into the ‘firmware’ means that error handling, power-on-demand and virtual resource pool management mechanisms are all provided automatically. Finally, because it offers hardware independence, virtualization makes it easier to put in place disaster recovery infrastructures, which can be implemented even on servers with different characteristics from those of production machines.  
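A rough availability calculation (with assumed, purely illustrative downtime figures) shows what removing scheduled maintenance windows is worth over a year:

```python
# Assumed figures, for illustration only: four 6-hour maintenance windows per
# year plus 4 hours of unplanned incidents.
HOURS_PER_YEAR = 365 * 24

def availability(planned_hours: float, unplanned_hours: float) -> float:
    """Fraction of the year the service is up, given yearly downtime in hours."""
    return 1 - (planned_hours + unplanned_hours) / HOURS_PER_YEAR

before = availability(planned_hours=24, unplanned_hours=4)
after = availability(planned_hours=0, unplanned_hours=4)   # maintenance done live

print(f"without live partition mobility: {before:.3%}")
print(f"with live partition mobility:    {after:.3%}")
# Roughly 99.68% versus 99.95%: about one extra day of service per year.
```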

Some useful recommendations
As we have seen, virtualization is a technological approach that can resolve the key challenges of service quality. To reap maximum gain from the benefits it offers, it is vital to consider the infrastructure architecture as a global entity, including servers, network and storage. Another important point: the business needs to be actively involved in the strategic thinking around this, from the initial design through into production and at every stage in the program of continuous improvement. Effectively, it is business managers who hold the key to interpreting service quality parameters for each individual business activity. And it is the technical managers, on the other hand, who can estimate the costs involved in favoring one or other of those indicators. The dialogue between the contracting authority and the prime contractor must, therefore, be maintained quite apart from the project itself, possibly under the terms of an internal service contract, so it becomes possible to assess the relevance of a particular service requirement, or to evaluate an investment opportunity in relation to the organization’s strategic objectives.
Finally, for the organization to take full advantage of the flexibility that virtualization affords, operational managers must under no circumstances be excluded from the process. In fact, they should be involved right from the start of the project, so that maximum benefit can be gained from their technical expertise, with the aim of achieving comprehensive cost control for the service concerned, over the whole of its lifespan.
