CTBTO's data processing capabilities increase with overhaul of computer systems and migration to Linux
The International Data Centre (IDC) in Vienna has just completed a challenging five-year project to overhaul its computer systems. This massive endeavour involved the replacement of all IDC data processing computers with Linux machines (Linux is a full Unix-type operating system) and the associated changes to IDC processing software. The details of the project, which started in mid-2005 and was carried out in phases, shifted slightly over time in order to take advantage of the latest technological developments. The project involved the redesign of the operating hardware and the migration, or adaptation, of IDC processing applications to this new design.
IDC’s performance improves as a result of new system
The benefits of the migration to Linux and the new hardware are tangible: performance has increased significantly compared to the previous servers; IDC analysts can carry out the same amount of work in less time; and the new system provides enough processing power for the IDC to catch up quickly with data processing when recovering from outages. The current design allows computational power to be added incrementally as needed, rather than replacing existing computers with larger, more powerful ones. This is important for computationally intensive tasks such as data authentication, simulations and atmospheric transport modelling.
Huge volume of data transmitted every day to the IDC
Every day around 10 gigabytes of raw data are transmitted to the IDC from over 250 seismic, hydroacoustic, infrasound and radionuclide monitoring stations around the world. These stations are part of the International Monitoring System (IMS), which will include 337 facilities when complete. The IMS and the IDC are key components of a verification regime being established to ensure compliance with the Comprehensive Nuclear-Test-Ban Treaty (CTBT), which bans all nuclear explosions.
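The figures above imply a modest average data rate per station. A back-of-the-envelope calculation (a sketch only, assuming the quoted round numbers of 10 gigabytes per day and 250 stations; actual rates vary widely between seismic, hydroacoustic, infrasound and radionuclide stations):

```python
# Rough average per-station data rate, from the figures quoted above:
# ~10 GB/day arriving from ~250 stations. Illustrative only.
DAILY_VOLUME_GB = 10
STATIONS = 250
SECONDS_PER_DAY = 86_400

per_station_mb_per_day = DAILY_VOLUME_GB * 1_000 / STATIONS          # megabytes/day
avg_kbit_per_s = per_station_mb_per_day * 8_000 / SECONDS_PER_DAY    # kilobits/second

print(f"~{per_station_mb_per_day:.0f} MB per station per day")   # ~40 MB
print(f"~{avg_kbit_per_s:.1f} kbit/s average per station")       # ~3.7 kbit/s
```

In other words, the challenge is less the raw bandwidth per station than the continuous, round-the-clock processing of data arriving from hundreds of sources at once.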
The data collected by the IMS are processed as soon as they reach the IDC, with the first automated products released within two hours of the arrival of the raw data. The products, which comprise lists of seismological and acoustic events and radionuclide detections, are then reviewed by analysts at the IDC in order to prepare quality-controlled bulletins. The data are used to register, locate and analyse events, with the emphasis on detecting nuclear explosions.
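The two-stage flow described above can be sketched in outline. This is a hypothetical illustration of the automated-then-reviewed pipeline, not the IDC's actual software; all class and function names are invented for the example:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of the flow described above: raw data pass through
# automated processing first, then analysts review the automatic results
# to produce quality-controlled bulletins. Names are illustrative only.

@dataclass
class Event:
    origin_time: str
    latitude: float
    longitude: float
    reviewed: bool = False

@dataclass
class Bulletin:
    events: List[Event] = field(default_factory=list)

def automated_processing(raw_data) -> Bulletin:
    """First pass: automatically detect and locate candidate events."""
    # ...signal detection, association and location would happen here...
    return Bulletin(events=[Event("2010-01-01T00:00:00", 48.2, 16.4)])

def analyst_review(auto: Bulletin) -> Bulletin:
    """Second pass: analysts confirm, refine or reject each automatic event."""
    reviewed = [Event(e.origin_time, e.latitude, e.longitude, reviewed=True)
                for e in auto.events]
    return Bulletin(events=reviewed)

auto_bulletin = automated_processing(raw_data=None)
final_bulletin = analyst_review(auto_bulletin)
print(all(e.reviewed for e in final_bulletin.events))  # True
```

The key design point the article describes is the separation of the fast automated pass, whose products appear within two hours, from the slower human review that yields the final bulletins.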
New hardware required in order to meet IDC’s ever-growing processing needs
As envisaged in the Treaty, the number of monitoring stations transmitting data to the IDC has been increasing annually. By 2005 it became clear that the IDC's data processing needs were beginning to exceed the capacity of the hardware in use at the time. In addition, that hardware was approaching eight years of age and was becoming technically obsolete, so it needed to be replaced.
Advantages of open source systems
It was therefore decided to migrate the IDC applications software (i.e. data acquisition, forwarding, automatic processing and interactive analysis) to open source systems, of which Linux is the best-known example. Based on its reputation for cost-effectiveness, reliability, flexibility and fast performance, Linux was selected as the new IDC-supported operating system. Linux runs on a variety of hardware platforms from almost every established computer manufacturer, so the IDC, and others wanting to make use of the IDC software, now have a vast selection of hardware capabilities and prices to choose from.
IDC application software has been adapted to this new operating environment, and for the most part this was voluminous but routine conversion work. Some parts of the original system, however, were available neither in source form nor in Linux versions, and had to be rewritten. Large parts of the radionuclide software fell into this category, so radionuclide processing has been redesigned.
During the migration, routine code improvements were made, numerous bugs were fixed and some performance issues were addressed. The resulting software is more stable and easier to maintain than the original code.