Scientific computing (also called computational science, supercomputing, or high-performance computing; henceforth SC) has traditionally been the field of study concerned with constructing mathematical models, simulations, and numerical solution techniques, and with using significant computing power and/or large data-storage facilities to analyse and solve scientific and engineering problems. Scientists, far from being merely computer scientists, use or develop computer programs that model the systems they study, and run these programs with various sets of input data. Typically, these models require massive amounts of computation and/or data storage, and are often executed on supercomputers (SMPs or clusters), which are the research infrastructure for computational scientists in the same way that, for example, telescopes are for astronomers or ships are for marine biologists. One major difference, however, is that this type of infrastructure does not support just one specific user community, but an ever-growing pool of scientific areas. In fact, SC is needed for any type of scientific research involving modelling, simulation, and/or the analysis of large datasets. As computers increasingly permeate society and become ever more powerful, a growing number of scientific areas are turning to SC. Examples are numerous and include molecular analysis for the development of new materials, modelling a population's economic or social behaviour, analysing globally linked museum data archives, drug design, Earth System Models analysing global climate change, and simulating wind turbulence for the optimal design of wind turbine blades. Since SC is often too expensive for individual scientists or even for universities, shared SC infrastructure is becoming an important asset for public and private research.
The world faces scientific challenges that cannot be tackled by a single country alone, but only by sharing responsibilities, efforts, and resources among many research communities all over the world. For this reason, more open access to data repositories and scientific resources, across both thematic and geographical distances, is being fostered. A major challenge in this respect is the data-explosion problem: collecting scientific data has progressed from the time when a scientist scribbled notes on paper to automated data collection using many inexpensive sensors, leading to an explosion in the amount of data collected. Examples are numerous and include meteorological data, seismic data, genetic data, astronomical data, medical imaging, and so on. To this must be added the ever-growing information archives in the arts and social sciences, which require further large storage and processing facilities. Yet another challenge is how such facilities can be fully utilised. SC, or the somewhat broader field of computational science, is able to deal with these problems, but has itself been significantly complemented in recent years by the new technology of Grid Computing.