Experts voice
Infrastructure virtualization

Virtualization, consolidation, power on demand, micro-partitioning…
All these technologies enable your infrastructure to adapt dynamically to operational needs, and so reduce both total cost of ownership (TCO) and infrastructure complexity.

Pascal Beyls, Infrastructure Consultant, Bull Product Engineering

Currently, around 70% of corporate IT budgets are used for maintaining existing infrastructures, according to IDC’s recent study entitled “On-demand enterprise and utility computing”.

So one question needs to be asked: “How can I make my infrastructure more flexible and easier to manage?” At the same time, real server utilization rates are often below 40%. Given that systems have become very powerful – a dual-core POWER5 processor can deliver as much as 100,000 tpmC – a second question is worth asking: “How can I share my resources more effectively?”

One answer to all these questions is virtualization.

The aim of virtualization is to separate IT resources (hardware, software…) from their physical implementation. Computing resources are presented in a way that users and applications can easily exploit, rather than in a way dictated by their implementation, geography or physical packaging. In other words, what gets used are ‘logical objects’ (CPU processing power, storage, platforms…). From the operating system’s point of view, these behave like real objects, so IT departments no longer need to be preoccupied with the physical constraints of implementation, only with the services demanded of the infrastructure. The whole thing becomes a kind of ‘black box’, managed and controlled automatically.
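
To make the idea of a ‘logical object’ concrete, here is a minimal sketch in Python (all class, variable and device names are invented for illustration) of a logical volume that behaves like one disk while its extents actually live on several physical devices:

    # Minimal sketch of a 'logical object': a volume that looks like one
    # contiguous disk to the OS while its extents actually live on several
    # physical devices. All names are invented for illustration.

    class PhysicalDisk:
        def __init__(self, name, size_gb):
            self.name = name
            self.size_gb = size_gb

    class LogicalVolume:
        """What users and applications see: a single address space."""
        def __init__(self, name, extents):
            # extents: list of (physical_disk, offset_gb, length_gb)
            self.name = name
            self.extents = extents

        @property
        def size_gb(self):
            return sum(length for _, _, length in self.extents)

        def locate(self, offset_gb):
            """Map a logical offset onto the physical disk that holds it."""
            for disk, phys_offset, length in self.extents:
                if offset_gb < length:
                    return disk.name, phys_offset + offset_gb
                offset_gb -= length
            raise ValueError("offset beyond end of volume")

    d1, d2 = PhysicalDisk("array-A", 50), PhysicalDisk("array-B", 50)
    vol = LogicalVolume("data", [(d1, 0, 50), (d2, 0, 50)])
    print(vol.size_gb)     # 100 -- one 100 GB volume, as far as users know
    print(vol.locate(75))  # ('array-B', 25) -- physical placement is hidden

The operating system addresses only the logical volume; working out where the data physically lives is the virtualization layer’s job.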

For example, for a project manager, data storage virtualization could translate as follows. When a new server is installed, all he or she has to do is check that the wall sockets connecting to the storage network are in place, then express the requirements to the system administrator: the necessary storage space (in gigabytes), performance (IO/s), availability (acceptable service downtime), backup policy (backup window, retention times…) and any other specific needs (disaster recovery protection, duplication, etc.). Everything is then managed independently of the server!
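
Expressed as data, such a service-level request might look like the following sketch (the field names and values are hypothetical; real provisioning tools each have their own vocabulary):

    # A storage request expressed purely in service terms -- no server,
    # LUN or array name appears anywhere. Field names are hypothetical.
    storage_request = {
        "capacity_gb": 500,
        "performance_iops": 2000,
        "availability": "99.9%",        # i.e. acceptable service downtime
        "backup": {
            "window": "22:00-04:00",    # backup window
            "retention_days": 30,       # retention time
        },
        "disaster_recovery": True,      # replicate to a remote site
    }
    # The administrator (or an automated layer) maps this onto whatever
    # arrays and paths satisfy it; the requesting project never sees which.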

The term ‘virtualization’ covers a whole range of techniques applied to every area of IT: servers, operating systems, storage, communications, applications, etc. It already exists to some extent in current systems: on Escala® servers running AIX™; on NovaScale® and Express5800 servers running Linux® (Virtuozzo, Xen) or Windows® (Virtual Server, VMware); in storage area networks; in backup (StoreWay Virtuo); in networks (VLANs, VPNs); and in databases (Oracle’s Grid Computing technologies).

These techniques range from the very simple (RAID disks used in a virtualized array) to the highly complex (VMware virtualization in Windows environments). They can be implemented in several different ways: by resource sharing (partitioning), by resource aggregation, by emulation, or by on-line switching (‘out-of-band’ storage virtualization, for example).
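
As a toy illustration of the resource-sharing approach, in the spirit of micro-partitioning (all figures and partition names are invented), CPU entitlements can be carved out of a shared pool:

    # Toy illustration of resource sharing (partitioning): a pool of
    # physical processors carved into fractional entitlements, in the
    # spirit of micro-partitioning. All figures and names are invented.
    pool_cpus = 4.0                 # physical processors in the shared pool

    partitions = {                  # entitlement, in processing units
        "web": 0.5,
        "app": 1.2,
        "db":  1.8,
    }

    assert sum(partitions.values()) <= pool_cpus, "pool oversubscribed"

    for name, entitlement in partitions.items():
        print(f"{name}: {entitlement} CPU "
              f"({entitlement / pool_cpus:.1%} of the pool)")

    # The 0.5 CPU left unallocated can be reassigned on the fly --
    # the 'power on demand' idea mentioned above.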

So virtualization is a genuine simplification tool, as it reduces the constraints involved in implementing and optimizing resources. It is estimated that a fully virtualized architecture can cut the amount of hardware needed by a factor of up to three. And the argument between the ‘scale up’ approach to systems evolution (i.e. moving to a bigger system) and the ‘scale out’ approach (numerous smaller systems) no longer needs to take place.
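
That hardware-reduction estimate is simple utilization arithmetic. Here is a back-of-the-envelope check, assuming the sub-40% utilization cited above, an example fleet of 30 servers and a hypothetical 80% target on the consolidated platform (lower starting utilizations push the factor towards three):

    import math

    # Back-of-the-envelope consolidation arithmetic. The 40% figure is the
    # real utilization cited above; the server count and the 80% target
    # are assumptions for illustration.
    n_servers          = 30      # existing dedicated servers (example)
    avg_utilization    = 0.40    # typical real utilization
    target_utilization = 0.80    # assumed safe ceiling after consolidation

    useful_load  = n_servers * avg_utilization           # 12 'server-loads'
    consolidated = math.ceil(useful_load / target_utilization)

    print(f"{n_servers} servers -> {consolidated} servers "
          f"({n_servers / consolidated:.1f}x less hardware)")
    # 30 servers -> 15 servers (2.0x less hardware)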

So achieving effective virtualization will be a key challenge for the years to come.

[Figure: A typical example of storage virtualization]
