The imperatives of service continuity and security, and how Bull delivers them in its hosting and outsourcing centers
Interview with Philippe Pauty, Director of the Bull Data Center, Trélazé, France.
Bull’s ‘Data Centers’ are at the heart of Bull’s outsourcing business. How do you perceive the developing needs of your customers in relation to these infrastructures?
IT Directors need to manage increasingly large volumes of data, as information systems become more open, new services are rolled out, and new regulations come into force. This in turn leads them to launch numerous projects to update, modernize and transform their Information Systems.
To achieve the fast response times required by businesses, they must be able to satisfy ever more demanding service requirements from operational management in terms of high availability for critical applications, rapid access to data and service continuity. These are absolute imperatives. But at the same time, IT Directors also have to manage the risks that threaten the security and availability of these applications. To do this they must implement strategies that give them total visibility of the IS, along with faultless security. This is achieved, among other things, through strategies for backing up and replicating data between remote sites, and these sometimes have to overcome major constraints inherent in the nature of the business, such as the distance between the production and disaster recovery sites (which can be over 300km for some businesses).
The increasingly widespread availability of high-density servers is having an immediate impact on Data Centers. Some of them are already saturated, or soon will be, and the density of IT infrastructures is fast becoming a major concern, with a sharp increase in problems linked to power consumption and thermal dissipation. The original design of many Data Centers simply no longer allows them to evolve. And the consequences are far-reaching in terms of costs, maintenance of normal operating conditions, and IT security!
Should we be talking about a new generation of Data Centers?
Yes, because customers’ needs and technologies are evolving so quickly. This rapid evolution of IT infrastructures means that hardware resources have to be optimized, the administration of security tools needs to be simplified, and above all, infrastructures need to be rationalized. The multiplicity of systems and new versions of applications quickly leads to very complex operating environments, as well as a total lack of flexibility: which is just what the business doesn’t need! That’s why organizations now expect to benefit from technical advances, with demands being made on service providers like Bull for a high level of expertise in new technologies like virtualization, which are very complex to deploy. Or they may expect to capitalize on automated and professional service processes, especially to deliver effective service continuity or high levels of availability.
One last point: while today’s changes are no longer governed purely by the need to reduce costs, budgetary control nevertheless remains a determining factor within any strategy that businesses may adopt.
How have you taken these major developments into account in your Data Centers?
Firstly, we have implemented an information systems ‘urbanization’ methodology (to create a cohesive set of systems), and have revised our methods for tackling thermal dissipation in response to the constraints relating to high density. Energy management is at the heart of our concerns. Power consumption is one of the main factors affecting Data Center running costs, and this trend is set to continue.
This is an area where we can offer considerable added value. We regularly restructure our own Data Centers, and share our expertise in this field in order to offer optimized floor space. This is the result of long-term planning, our objective being to anticipate growing demands in the hosting business.
As a result, we have created separate zones for handling the problems of high and low density in different ways; while high density requires specific resources, classical configurations still meet the requirements of standard hosting. It is clear that a business-critical application does not require the same hosting resources as a development environment. This kind of flexibility is essential when it comes to adapting our offering to suit individual customers’ hosting requirements.
Another major development is that we have set up a new high-speed network operating between two of our sites located more than 300km apart, via the dark fiber optic network supplied by our subsidiary Agarik, a specialist in hosting critical Web applications. Thanks to the integration of DWDM (Dense Wavelength Division Multiplexing) technologies, the theoretical maximum throughput can reach 400 gigabits per second. Finally, thanks to the ‘dual building’ design used for our main hosting and outsourcing site, we can also offer synchronous replication to fulfill more specific IT security requirements.
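The distance figures mentioned here are not arbitrary: synchronous replication requires the remote site to acknowledge every write, so propagation delay in the fiber itself sets a hard floor on write latency. A minimal back-of-the-envelope sketch (our own illustration, not Bull's figures), assuming the usual approximation of ~200,000 km/s for light in glass fiber:

```python
# Why inter-site distance constrains synchronous replication:
# light in optical fiber travels at roughly 200,000 km/s (vacuum speed
# divided by the fiber's refractive index, ~1.5), so every kilometer
# of separation adds unavoidable propagation delay to each write.

SPEED_IN_FIBER_KM_S = 200_000  # approximate speed of light in glass fiber

def round_trip_latency_ms(distance_km: float) -> float:
    """Minimum round-trip propagation delay over a fiber link, in ms."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_S * 1000

# For two sites 300 km apart, each synchronously replicated write waits
# at least this long for the remote acknowledgment, before any switching
# or storage-array overhead is even counted:
print(round_trip_latency_ms(300))  # 3.0 ms added to every write
```

This floor of roughly 3 ms per write over 300 km is why synchronous replication is typically reserved for short distances (such as between two buildings on one campus), while long-haul DWDM links more often carry asynchronous replication.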
We are also constantly developing our storage, virtualization and Open Source tools, not just so we can exploit information technologies more effectively, but also so we can ensure our resources are automated as far as possible, and propose optimum solutions in terms of cost and service quality.
Our priority is to offer true granularity in our offerings, thanks to the flexibility of our infrastructures, and to provide the best levels of service and IT security at a reasonable cost. Today, our Data Center sites are equipped to meet the highest demands in the industry. This kind of hosting capacity is in great demand!