N°34  |  February   2009
Solutions

HPC: a vital resource for the oil & gas industry
By Guy Gueritz, Director of Bull’s HPC business development in the oil & gas sector



It is no surprise that companies operating in the upstream oil and gas sector are historically among the largest industrial users of supercomputers.

These companies have always had especially high requirements when it comes to computing performance and scalability. Against a backdrop of heightened competition and ever-scarcer resources, their need for High-Performance Computing (HPC) is likely to increase even further, as it provides an effective answer to three main challenges they currently face.

1. Seismic imaging - saving tens of millions of dollars
The main use of HPC in the oil and gas sector is for seismic imaging. This technique, which uses the propagation and reflection of acoustic waves through the subsurface to determine its structure and the possible presence of hydrocarbon accumulations, is used during both the exploration and production phases. Reflected signals are assembled into composite sections or volumes, which are then viewed and analyzed to assess the nature and structure of the subsurface geology and its potential for containing commercial quantities of hydrocarbon deposits. However, the viable oilfields and gas reservoirs of the future are likely to be found at ever greater depths beneath the surface, or in areas that are extremely hard to access. Exploration is therefore becoming increasingly costly: drilling an exploration well in extreme offshore conditions can cost many millions of dollars. Improved seismic imaging provides a clearer view of the probable subsurface structure and improves the chances of drilling a successful exploration well.
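The physics behind seismic imaging can be sketched with a toy model: a one-dimensional acoustic wave stepped through a layered medium by finite differences, where a velocity contrast at depth produces the reflection that imaging algorithms exploit. This is purely an illustrative sketch, not any vendor's production code; the grid sizes, velocities and source are invented for the example.

```python
import numpy as np

# Toy 1-D acoustic wave propagation by explicit finite differences.
# A velocity contrast at depth produces a reflected wave: the signal
# that seismic imaging assembles into pictures of the subsurface.
nx, nt = 400, 900          # grid points, time steps (illustrative values)
dx, dt = 5.0, 0.001        # 5 m spacing, 1 ms time step (Courant number <= 0.6)
c = np.full(nx, 1500.0)    # slow, water-like layer velocity (m/s)
c[200:] = 3000.0           # faster rock below 1000 m acts as a reflector

u_prev = np.zeros(nx)      # wavefield at t - dt
u_curr = np.zeros(nx)      # wavefield at t
u_curr[5] = 1.0            # impulsive source near the surface

for _ in range(nt):
    lap = np.zeros(nx)     # second spatial derivative (fixed boundaries)
    lap[1:-1] = u_curr[2:] - 2 * u_curr[1:-1] + u_curr[:-2]
    u_next = 2 * u_curr - u_prev + (c * dt / dx) ** 2 * lap
    u_prev, u_curr = u_curr, u_next
```

Production codes solve the same wave equation in three dimensions over billions of cells, which is why the workload maps so naturally onto large HPC clusters.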

Oil companies expect to drill ‘dry holes’ during the exploration phase of a discovery: a success rate of one well in three is considered excellent. They are therefore constantly trying to improve the quality of their seismic imaging to achieve better success rates, and more sophisticated algorithms are being adopted as HPC platforms become ever more scalable and cost-effective. Despite recent fluctuations in the oil price, which reflect trader sentiment on the economic trends affecting supply and demand, prices are expected to average $70 - $80 per barrel this year as the commodity becomes scarcer. Consequently, marginal fields with complex geology, and new discoveries at extreme depths or in hard-to-reach locations, are now considered potentially valuable new sources of hydrocarbons. Likewise, improving recovery rates in mature fields has become a priority, and there a better knowledge of the structural and dynamic character of the reservoir is critical to managing the asset effectively and productively.

Against this background, with oilfields becoming ever harder to reach, the oil and gas sector has strong incentives to equip itself with High-Performance Computing solutions: to ensure the accuracy of exploratory searches at extreme depths, and to reduce exploration and field development costs. For some years both oil and gas companies and the suppliers of seismic data have been building up their HPC capabilities using x86 clusters. This has now reached a watershed: limits in terms of physical space, power consumption and turnaround time have been hit. As some seismic contractors have put it, a typical goal would be “five times the processing power for twice the cost”. Absolute purchase cost has to be weighed against the cost of delivering increased productivity in terms of physical space, heat dissipation and electrical power. Users have realized that, for seismic imaging at least, new technology is required to achieve greater productivity at lower real cost. For this reason, accelerator technologies are again being employed: standard cards originally developed for 3D visualization and gaming have been adapted for HPC to accelerate certain parts of the seismic code. This has brought very significant increases in compute power and processing speed, together with lower electrical power and floor-space costs. At the same time, new skills are needed to adapt existing codes to these hybrid architectures; an experienced supplier can provide them either as a turnkey solution or through consulting and training for the client.
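The “five times the processing power for twice the cost” goal quoted above, weighed against electrical power as the text suggests, boils down to a simple lifetime price/performance comparison. The sketch below uses entirely invented figures (job rates, purchase prices, power draws, tariff); only the ratio logic is the point.

```python
# Illustrative total-cost comparison for the "5x power, 2x cost" target.
# Every number here is invented for the sketch; only the ratios matter.
def throughput_per_dollar(jobs_per_day, purchase_cost, kw_draw,
                          price_per_kwh=0.10, lifetime_days=3 * 365):
    """Seismic jobs delivered per dollar over the system lifetime,
    counting purchase price plus electricity consumed."""
    energy_cost = kw_draw * 24 * lifetime_days * price_per_kwh
    return jobs_per_day * lifetime_days / (purchase_cost + energy_cost)

# Hypothetical x86 cluster vs. hypothetical accelerator-based hybrid:
x86_cluster = throughput_per_dollar(jobs_per_day=10, purchase_cost=2e6, kw_draw=250)
hybrid = throughput_per_dollar(jobs_per_day=50, purchase_cost=4e6, kw_draw=200)
print(hybrid / x86_cluster)   # > 1: the hybrid delivers more work per dollar
```

Even with double the purchase price, the fivefold throughput and lower power draw leave the hybrid system well ahead on work delivered per dollar, which is exactly the trade-off the contractors' goal expresses.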


The new hybrid supercomputer developed by Bull for the GENCI consortium in France demonstrates Bull’s know-how in this area. To operate at full capacity, systems need a technical architecture well suited to the programs they will run. This requires close collaboration with customers who develop and use their own seismic imaging tools. It is therefore important to establish a viable partnership between the customer developing the algorithms and a system supplier, such as Bull, providing the underlying architecture and tools. Beyond technical skills, the supplier must understand the customer’s workflow needs and performance priorities, and that understanding can only be gained through close dialogue with the client.

2. Managing reservoirs in real time to optimize their utilization
Oilfields under production have to be managed for optimal hydrocarbon recovery under conditions of considerable uncertainty. Methods of simulating a reservoir’s dynamic behavior have been in use for some time: they model how pressure, temperature, porosity, permeability, water saturation and other properties of the reservoir rock drive hydrocarbon fluids (or fail to drive them) towards the producing wells. However, these methods have always required some ‘coarsening’ of the geological model to fit the computational limits of the simulator’s mathematical model and the underlying hardware capacity. Simulations have often been limited to studies around specific wells, or have used simplified parameters, in order to deliver results in a reasonable timeframe. As HPC systems improve in cost, scalability and bandwidth, simulations can now run multi-million-cell models with a more sophisticated range of hydrocarbon components, within acceptable computing times.
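The ‘coarsening’ mentioned above, upscaling a fine geological grid onto a simulator grid, can be illustrated by block-averaging a rock property such as porosity. This is a deliberately simplified sketch with invented data: real upscaling, especially of permeability, uses flow-based or power-averaging methods rather than plain arithmetic means.

```python
import numpy as np

# Illustrative coarsening: average a fine geological porosity grid into
# simulator-sized blocks. Data and grid sizes are invented for the sketch.
rng = np.random.default_rng(0)
fine = rng.uniform(0.05, 0.30, size=(400, 400))   # fine geological model

factor = 8                                         # 8x8 fine cells -> 1 coarse cell
coarse = fine.reshape(400 // factor, factor,
                      400 // factor, factor).mean(axis=(1, 3))

print(fine.size, "->", coarse.size)                # 160000 -> 2500 cells
```

A 64-fold reduction in cell count like this is what made simulation tractable on earlier hardware; growing HPC capacity lets engineers run the fine model directly and so lose less geological detail.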

Increasingly, reservoir simulation is becoming a major component of real-time reservoir management systems, where the reservoir model is continuously updated with real-time information gained from environmental sensors located in the producing wells. Results along with other information (live video, production and instrumentation data, etc.) are presented using large-screen display technology. Bull’s expertise in building and managing very large HPC systems projects, including the infrastructure, energy efficiencies and secure communications, can be applied to these ‘Digital Oilfield’ control centers, which are increasingly being adopted by major oil and gas companies.

3. Powerful Data Centers and high levels of security
Oil and gas companies have for several years been working towards consolidating their HPC capacities into true IT power plants, both to better manage their immense data and application challenges and to provide a centralized service to their geosciences and engineering users. These users may be located close to the centers themselves, or distributed across a number of locations. In some multinationals, this may be a global, round-the-clock facility supporting collaboration between different workgroups, individuals and task contexts.

This dream still has some way to go before it is fulfilled. Bull’s experience and expertise in building large data centers is embodied in Bull’s Bio Data Center™ solution, which combines scalability with an optimized computing environment. Oil companies’ IT departments need management tools for such an infrastructure, so that computing capacity can be distributed wherever it is needed, with maximum availability. This also means developing secure network infrastructures and tools that support user mobility, enabling staff to connect to the system when out in the field and to use their own applications and data in total security.

From the most powerful and complex HPC systems right down to globull™, Bull’s mobile secure storage and virtualization device, Bull can offer a wide range of effective solutions for the technical computing needs of this demanding and strategic industry.

Bull will exhibit its HPC solutions for the Oil and Gas industry at Amsterdam ’09 >>
