Ahcène has over 15 years' international experience in IT solutions, services and products for security, enterprise and telecommunications infrastructure management. His skills range from consultancy and international pre-sales to product and marketing management. He has a PhD in computing and telecommunications, and is a graduate of ESSEC in International General Management.
Make no mistake: green computing cannot simply be reduced to ‘greenwashing’, an unjustified appropriation of the virtues of environmental awareness. To dismiss it in this way is to risk missing out on the real technological breakthroughs that are set to transform the Data Center, and to fail to prepare for the much more stringent sustainable-development regulations that lie ahead.
Just like all other sectors of the economy, the IT industry is now hemmed in by environmental considerations. For decades, server technology was limited only by the infinitely small: the matter of silicon and storage technologies. And year after year the boundaries were pushed back by ever smaller, faster and more powerful processors and ever higher capacity disks. Today, the IT industry is increasingly aware that it is operating under significant constraints: not least the growing cost of energy, floor space and cooling. Over and above these limitations, increased environmental awareness is bringing with it new challenges: the need to restrict the use of precious natural resources and reduce carbon emissions for the planet, and to ensure greater corporate social responsibility.
So we simply cannot allow electricity consumption by Data Centers to grow unchecked, given that in most major centers of population the power grid is operating at maximum capacity and energy bills are increasingly high (even if they are often hidden away in the detail of organizations’ accounts across the public and private sector alike). Hence the emergence of the idea of ‘Green IT’ or ‘Green Computing’.
What exactly do we mean by ‘Green computing’?
This term covers all kinds of technologies and services that are designed to improve the energy efficiency of servers, including cooling and electricity supply technologies and systems, Data Center architecture (including energy recycling, reduction in and even elimination of hazardous substances, and the use of bio-degradable materials), and the ability to recycle IT hardware, as well as all solutions and best practices that may lead to lower bills for energy and floor space.
In terms of servers, single-core processor technology has reached its limits. There is hardly any room left to increase processor speed while maintaining the thermal balance, because energy consumption grows at a faster rate than processor speed. Gordon Haff, an American analyst specializing in IT infrastructures, recently noted that if processor speed is increased by a factor of two, energy consumption goes up by a factor of four, or even more.
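That rule of thumb can be sanity-checked against the standard dynamic-power model. The sketch below is a simplification (the quadratic and cubic exponents are modeling assumptions, not figures from this article):

```python
# Simplified dynamic-power model: P = C * V^2 * f (capacitance, voltage, frequency).
# Assumption (not from the article): if supply voltage must rise roughly in line
# with frequency, power grows with f^3; even the more conservative quadratic
# model reproduces the "double the speed, quadruple the power" observation.

def relative_power(freq_ratio: float, exponent: float = 2.0) -> float:
    """Power consumption relative to baseline, for a given frequency ratio."""
    return freq_ratio ** exponent

# Doubling clock speed under the quadratic model:
print(relative_power(2.0))        # 4.0 -> "a factor of four"
# Under the cubic model (voltage scaling included), it is worse still:
print(relative_power(2.0, 3.0))   # 8.0
```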
So processor technology is at the forefront of energy efficiency. Multi-core technology aims to improve processing power for the same level of energy consumption, and the emergence of quad-core processors is just the first step in this direction. Another emerging aspect of processor technology is the addition of various operating modes (such as variations in processor frequency and low-voltage modes) allowing energy consumption to be finely controlled.
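A frequency-variation policy of the kind described above can be sketched as a simple rule: run at the lowest frequency that still covers the current load. The frequency steps and the 80% utilisation threshold below are hypothetical, chosen only to illustrate the idea:

```python
# Minimal sketch of a frequency-scaling policy (hypothetical thresholds,
# not a real governor): pick the lowest frequency that covers current load.

def pick_frequency(load: float, freqs_mhz: list) -> int:
    """Given utilisation at full speed (0.0-1.0) and the available frequency
    steps, return the lowest step that keeps projected utilisation under 80%."""
    f_max = max(freqs_mhz)
    for f in sorted(freqs_mhz):
        # To a first approximation, utilisation scales inversely with frequency.
        if load * f_max / f <= 0.80:
            return f
    return f_max

steps = [800, 1600, 2400, 3200]        # hypothetical frequency steps, in MHz
print(pick_frequency(0.10, steps))     # light load -> lowest step (800)
print(pick_frequency(0.75, steps))     # heavy load -> full speed (3200)
```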
Other server elements, such as flash memory and high-efficiency power supplies (delivering efficiency of 92% or more), are also designed to save even more energy.
When it comes to cooling servers, the wheel has come full circle with the return of liquid cooling techniques (using water or another liquid less harmful to electronic components): something that mainframe manufacturers have known about for a long time. Liquid cooling, along with the addition of cooling doors on server racks or miniaturized radiator circuits on critical parts of servers, is a well-known and effective technique that also has a positive impact on the environment. Not only can naturally cold water, which is inexpensive or even free, be used, but once it has been heated up in this process it can be recycled for many different purposes.
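The amount of heat a water circuit carries away, and so the energy available for reuse, can be estimated from first principles: flow rate times the specific heat of water times the temperature rise. The flow and temperature figures below are purely illustrative:

```python
# Rough heat-recovery estimate for a water-cooled rack (illustrative numbers).

def recovered_heat_kw(flow_l_per_min: float, delta_t_c: float) -> float:
    """Thermal power (kW) carried by the water: mass flow * c_p * dT.
    Water: ~1 kg per litre, specific heat c_p ~ 4.186 kJ/(kg*K)."""
    flow_kg_per_s = flow_l_per_min / 60.0
    return flow_kg_per_s * 4.186 * delta_t_c

# 30 l/min of water warmed by 10 degrees C removes, and makes available
# for reuse, about 21 kW of heat:
print(round(recovered_heat_kw(30, 10), 1), "kW")   # 20.9 kW
```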
When it comes to the Data Center as a whole, virtualization and overall control systems for energy consumption are redefining the architecture and practical ways of operating these facilities. There is a new paradigm in the administration of IT infrastructures: how do you maximize the use of the available resources, automate the process of deploying and operating them, and reduce the amount of power needed to run them, while at the same time guaranteeing the levels of service that users demand? Virtualization is in the front line of energy efficiency, as it provides the ability to concentrate and share processing power. It can result in energy savings of over 70% in environments consisting of a large number of small-scale servers. With this in mind, we are starting to see the return of the large-scale, powerful enterprise server or ‘mainframe’, as with the latest high-end releases in the Bull Escala® server family, which feature AIX® innovations such as partition cloning. Notably, Bull NovaScale® 7000 and 9000 servers have some of the lowest energy costs per transaction on the market, because they combine the remarkable efficiency of ‘mainframe’ transaction processing systems with architectures based on standard Intel® processors.
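The 70%-plus figure can be illustrated with a back-of-the-envelope consolidation calculation. The server counts and wattages below are hypothetical, chosen only to show the arithmetic:

```python
# Back-of-the-envelope consolidation estimate (illustrative figures): N
# lightly-loaded standalone servers replaced by a few virtualized hosts.

def consolidation_saving(n_small: int, small_watts: float,
                         n_hosts: int, host_watts: float) -> float:
    """Fraction of energy saved by consolidating onto virtualized hosts."""
    before = n_small * small_watts
    after = n_hosts * host_watts
    return 1 - after / before

# 40 small servers at 300 W each, consolidated onto 4 hosts at 800 W each:
saving = consolidation_saving(40, 300, 4, 800)
print(f"{saving:.0%}")   # 73% -> in line with the 70%+ figure cited above
```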
The ‘power management’ function, which automatically powers servers up and down, also reduces energy consumption; other facilities can migrate workloads from one server to another, or control the cooling process according to the thermal topology of the machine room. These facilities operate according to the profile of business process activities, and are complemented by high-availability features to ensure that the quality of service demanded by users is delivered at the best possible price (both economic and environmental).
In the computer suite, the floor space (m²) that needs to be powered and secured is a major component of the economic model for outsourcing and hosting solutions. This is also one of the factors that plays in favor of outsourcing and hosting of information systems: if they do not have access to appropriate, well-utilized floor space in-house, IT Directors will look for it outside the organization. Obvious levers include space optimization solutions that deliver better use of energy, such as optimizing cold-aisle/hot-aisle cooling, equipping secure infrastructures with liquid cooling, using alternative energy sources, using ambient air in some circumstances, and even increasing room temperatures (a rise of just a few degrees can represent a significant energy saving). The telecommunications world has long been well aware of these practices, using direct current instead of alternating current, which can deliver significant energy savings in electrical power supply chains.
Is software outside the scope of this issue?
At first glance one is tempted to say “yes”. But that would be a hasty answer. After all, it is application software that actually consumes computing power. By optimizing that software, you can reduce the use of system resources (CPU, memory, disk space...) and, as a result, reduce energy consumption. So some software is more efficient than other software: a clear ‘green’ differentiator, even in software. When it comes to implementing an energy management policy for the Data Center, will application software be just a passive ‘recipient’, like any other external constraint, or will it be a positive means of delivering that policy?
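The point that some software is ‘greener’ than other software can be made concrete with two functionally identical routines that differ only in algorithmic efficiency; CPU time serves here as a rough proxy for energy consumed:

```python
# Two functionally identical routines with very different resource profiles:
# a toy illustration of 'green' differentiation in software.
import time

def count_unique_slow(items):
    """O(n^2): rescans the list for every element."""
    return sum(1 for i, x in enumerate(items) if x not in items[:i])

def count_unique_fast(items):
    """O(n): a single pass building a set."""
    return len(set(items))

data = list(range(1000)) * 2
for fn in (count_unique_slow, count_unique_fast):
    t0 = time.process_time()
    result = fn(data)
    elapsed = time.process_time() - t0
    print(fn.__name__, result, f"{elapsed:.4f}s CPU")
```

Both routines return the same answer; only the resources consumed (and hence, at scale, the energy bill) differ.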
Another thing that has an indirect impact on software is virtualization. An autonomous piece of software running on a dedicated server, with a measurable energy usage profile, is a perfect candidate for optimized virtualization. On the other hand, chains of business applications in distributed architectures, using interdependent resources, may not be able to take full advantage of all the benefits of virtualization. One would need to consolidate the architecture of these applications in order to adapt them to the constraints of virtualization, or virtualization itself will have to evolve to integrate the topology of distributed computing, much as the role of middleware has evolved.
The Bio Data Center™
So ‘Green computing’ is about much more than just turning servers off at the weekend or making a few small technological tweaks. Each of the technological advances outlined above – whether at the level of the individual server, the whole Data Center or the computer suite – would result in a moderate reduction in the overall energy ‘envelope’.
But drastic reductions – such as those envisaged by the US Environmental Protection Agency (EPA) as a result of implementing the most highly optimized scenarios in its most recent report on this subject – will involve implementing a combination of these technologies, along with increasingly efficient business processes, all as part of an overall approach to continuous optimization. Such an approach will guarantee real economies of scale, but will also bring about new demands. This will involve putting in place a very well managed energy policy with clear metrics, built around best-of-breed technologies, best practice and tools to ensure effective and efficient control.
This is just the kind of approach that Bull offers, with the Bio Data Center™ concept.
The priority is to put the search for energy efficiency at the heart of technological choices, operational processes and economic policies for the Data Center. The vision of the Bio Data Center is a pragmatic, rigorous approach to energy efficiency, based around four best practices:
- A strategy for making technology choices, to build a portfolio of servers and storage solutions offering the best performance/watt ratio
- End-to-end control consolidation and virtualization solutions, including the various stages of analysis, planning, implementation and operational optimization
- The instrumentation needed to control and automate management operations for power consumption and system loading mobility
- An up-front audit of the energy profile
The first stage obviously involves carrying out an energy audit, to measure just how efficient the IT infrastructure currently is, identify areas of waste and critical areas of thermal imbalance, uncover all latent opportunities, and develop an approach that leads to a level of efficiency in line with the best standards in the profession, in order to meet both regulatory requirements and environmental imperatives. In other words, the aim is to develop a pragmatic action plan that delivers achievable and visible energy saving objectives, in line with the IT Department’s budget.
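One widely used starting point for such an audit is Power Usage Effectiveness (PUE): the ratio of total facility power to the power actually delivered to the IT equipment. A minimal sketch, with purely illustrative figures rather than measurements:

```python
# Power Usage Effectiveness: total facility power / IT equipment power.
# A PUE of 2.0 means every watt of computing costs a second watt of
# overhead (cooling, power chain, lighting...). All figures illustrative.

def pue(it_kw: float, cooling_kw: float, power_chain_kw: float,
        other_kw: float = 0.0) -> float:
    total = it_kw + cooling_kw + power_chain_kw + other_kw
    return total / it_kw

before = pue(it_kw=500, cooling_kw=400, power_chain_kw=100)  # legacy room
after = pue(it_kw=500, cooling_kw=150, power_chain_kw=50)    # after optimization
print(before, after)   # 2.0 1.4
```

Tracking a metric like this before and after each change is what turns the action plan into something with "achievable and visible" savings objectives.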