Deep under the ice of Antarctica, a telescope is taking shape - not one in the conventional sense of the word, based around optics or a radio dish, but a huge array of optical sensors designed to look for sub-atomic particles called neutrinos, explains E&T.
The telescope that is the IceCube array is encased in ice up to 2.4km below the surface at the South Pole, occupying a cubic kilometre of it, hence the name. When building is complete in January 2011, it will be the largest particle detector in the world.
It needs to be so large because of the neutrino's peculiar properties. Neutrinos often travel at close to the speed of light and, as their name suggests, are electrically neutral, so they are not deflected by interstellar magnetic fields, while their tiny mass - even by sub-atomic standards - allows them to pass through any intervening matter, such as the Earth, pretty much unhindered.
This makes them extremely difficult to detect, so immense instruments like IceCube are needed to find them in sufficient numbers to trace their origin - which, because they travel in straight lines, can be plotted back with some certainty.
But when they are detected, they give scientists an exceptional means of probing environments that can't usually be observed with other techniques, such as optical and radio telescopes. This makes them important for studying the core of the Sun, for example, and supernovae - their principal sources discovered so far - as well as the galactic core of the Milky Way. They're also important in the search for dark matter, which current theories say could account for most of the mass in the observable Universe.
The reason for building this instrument in one of the most inhospitable places on Earth is that Antarctic polar ice is ideal for detecting neutrinos. The South Pole is essentially an enormous glacier up to 3km thick, whose ice is under so much pressure that it has become exceptionally pure and ultra-transparent. Building the array beneath nearly 1.5km of this ice lets the signals from the detectors stand out better from the background noise generated by natural radiation at the surface.
There are three fundamental elements to the telescope - the array of optical sensors, called Digital Optical Modules (DOMs), a network and a data acquisition system (DAQ).
The overall design relies heavily on commercial computing and networking hardware and software from the e-commerce and Web services industry rather than process control. There is also a strong emphasis on the use of commercial Ethernet networking protocols and hardware instead of more traditional bus-based architectures such as VME. This is deliberate, largely because of the difficult maintenance environment at the South Pole.
When a neutrino collides with an atom of ice inside the detector array - called an "event" - it produces a particle called a muon, which in turn produces a flash of blue light known as Cerenkov radiation that typically travels for 100m or so through the otherwise dark ice and is detected by the DOMs along its path. The direction of the muon's travel is the same as that of the neutrino that produced it, and it's this that allows scientists to reconstruct the neutrino's path back to its cosmic source.
The DOMs capture the photons and digitise the resulting signals using an onboard ASIC. The array itself consists of 4,800 DOMs set along 80 strings - 60 per string at 17m intervals - with the strings spaced 125m apart over a square kilometre at depths of 1,400-2,400m below the surface.
Near the centre of the array, however, is a 'deep core' of six additional strings set 2,100-2,450m beneath the surface, spaced at 72m apart with the DOMs at 7m intervals. This is to take advantage of the fact that the ice is especially clear in this region.
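The geometry figures above can be checked with simple arithmetic - the following Python sketch uses only the numbers quoted in the article, not official specifications:

```python
# Back-of-the-envelope check of the main array's geometry,
# using the figures quoted above (80 strings, 60 DOMs per
# string, 17m vertical spacing).

strings = 80
doms_per_string = 60
dom_spacing_m = 17

total_doms = strings * doms_per_string            # the main array's DOM count
string_span_m = (doms_per_string - 1) * dom_spacing_m  # vertical extent of one string

print(total_doms)     # 4800
print(string_span_m)  # 1003
```

The 1,003m span per string agrees with the quoted 1,400-2,400m depth range, and the DOM count matches the 4,800 stated above.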
Proof of concept
Signals are transmitted to the DAQ such that a photon's time of arrival at a DOM can be determined to within a few nanoseconds. Pairs of DOMs are connected via DOM Hubs on the surface to the DAQ across a 100BaseT network, which also provides power and control to them. Conventional twisted-pair copper cabling was chosen over fibre-optics because of copper's lower cost and because of reliability problems with fibre in IceCube's predecessor, AMANDA (Antarctic Muon And Neutrino Detector Array), which was built as IceCube's proof of concept.
The DOM Hub design is centred on an embedded processor that provides low-level comms between the surface and each DOM, and TCP/IP comms to the string processors (SPs), which take inputs from several DOM Hubs and integrate them for an entire string.
Each SP is connected to a Global Event Trigger processor, which is responsible for generating lists indicating the occurrence of detector-wide events of interest. String coincidences - a collection of 'hits' - are reported by each SP to the processor, which is also connected to a set of processors called Event Builders.
For every muon from a cosmic neutrino seen by IceCube, however, a million more are produced by cosmic rays in the atmosphere above the detector. So IceCube points through the Earth to the skies over the North Pole: it looks for 'upward-moving' neutrinos that have penetrated the Earth from the northern skies. The telescope also has an array of detectors on the surface, IceTop, that serves as a partial veto for the 'downward-moving' background of muon showers created by cosmic-ray interactions in the atmosphere above the South Pole.
IceTop consists of 160 tanks of ice, each containing two DOMs. These 320 DOMs plug into DOM Hubs dedicated to IceTop, and the DOM Hubs connect to a dedicated IceTop Global Trigger CPU and Event Builder, with a LAN architecture similar to that for IceCube.
High-energy muons in the showers can penetrate deep into ice, so collisions on trajectories that pass near the deep detectors as well as IceTop light up both sets of detectors. These coincident events are particularly interesting to the scientists because they carry novel information about the properties of the cosmic radiation and the relative abundances of protons, helium and heavier nuclei in it.
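The pairing of IceTop and deep in-ice events can be pictured as a simple timestamp match. This is an illustrative sketch only - the function name, window size and data layout are assumptions, not IceCube's actual software:

```python
# Illustrative sketch of tagging coincident events: an IceTop
# surface event and a deep in-ice event are paired if their
# timestamps agree within a small window. The 2,000 ns window
# is a made-up figure for illustration.

def find_coincidences(icetop_times_ns, inice_times_ns, window_ns=2000):
    """Return (icetop_time, inice_time) pairs closer than window_ns."""
    pairs = []
    for t_top in icetop_times_ns:
        for t_deep in inice_times_ns:
            if abs(t_top - t_deep) <= window_ns:
                pairs.append((t_top, t_deep))
    return pairs

# A surface hit at t=0 and a deep hit 500 ns later coincide;
# hits a millisecond apart do not.
print(find_coincidences([0, 1_000_000], [500, 2_000_000]))  # [(0, 500)]
```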
When it appears that a detector-wide event has occurred, the Global Event Trigger tells the relevant SPs to tag all hits that occurred near enough to the global trigger time. The SPs send the data for the tagged hits to one of the Event Builder CPUs, which then send fully constructed events to a disk server.
In reality, several types of trigger are used in the detector, explains Dr Gary Hill, senior scientist in the project's deployment team. "The basic trigger is the in-ice one, where an event is triggered if a certain number of DOM hits appear in a certain time window. There are also string triggers, where a certain number of contiguous DOMs on one string are hit in a certain time window.
"The number of DOMs and time windows may differ from season to season depending on the requests of the analysis working groups," he says. "The IceTop array has its own trigger requirement that is independent of the in-ice trigger. When any trigger condition is met, all the DOMs - both in-ice and IceTop - are recorded for that interval."
Data transmitted via NASA
The acquired data is then sent back to the University of Wisconsin, the lead institution on the $271m project, for analysis. The most important data is transmitted via satellite, using NASA's Transfer and Data Relay Satellite System. Deciding which data is sufficiently important is one of the primary functions of the IceCube Laboratory (ICL) on the surface, which houses the computers for the DOM Hubs and the server-class computers for the data acquisition, processing and filtering, data handling and network services - as well, of course, as ancillary equipment and personnel quarters.
The actual hardware includes dual-core, 64-bit CPUs for the servers, 420GB of RAM in the Hub computers and nearly 200 hard drives to give 20TB of data storage. Altogether, the system needs about 25kW of power and is backed up by UPSs to keep all systems going for at least 30 minutes if the generators fail.
Only the most important data is transmitted via satellite because the total amount of raw data collected by IceCube far exceeds the project's daily satellite bandwidth allocation of about 70GB. The rest has to wait to be carried back on tape to the university after the summer season has begun. Usually only a couple of personnel - the 'winterovers' - stay at the telescope during the Antarctic winter.
The criteria for deciding which data is sent for transmission and which can wait for delivery by tape can be changed. As Dr Mark Krasberg, the project's on-ice calibration/verification lead, explains: "If the satellite capability decreases for some reason, then we would adjust our filtering cuts so that we stayed under whatever new budget limits were being imposed on us.
"This would potentially hurt our ability to do analysis using the satellite-transmitted data alone, and we may have to rely on the full set of data tapes if something like this happened, but that is what we would have to do."
Controlling the transfer of data from the South Pole to the university, as well as archiving data onto tape at the South Pole, is carried out by a suite of software called the Data Movement and Archival Subsystem. It consists of a set of Java applications that run on several computers at the ICL and interact with Java counterparts at the university. The primary application is called SPADE (South Pole Archival and Data Exchange).
Although IceCube won't be complete for another 12 months or so, scientists have been taking data from it since 2005, when the first DOM string was deployed. "The DOMs have already produced lots of useful data," says Dr Krasberg.
"In 2004-5, we deployed one string of 60 DOMs, then eight strings, then 13, then 18, and last year we deployed 19 strings, for a grand total of 59. We have consequently been taking more and more useful data each year, as our detector has grown in size," he says.
"In the beginning, the data was most useful as a tool for debugging the data acquisition and data analysis software. Now we are taking data normally and doing full-blown analyses of it to look for different types of signals.
"The most interesting signals are proving hard to find though - we need more data and a larger detector! But in December 2006, the IceTop DOMs detected particles from a solar flare, which was IceCube's first detection of an extra-terrestrial event," he says.
Unlike other types of telescope, IceCube is not 'aimed' at different points in the heavens. "The detector receives events from the full sky all at once," explains Dr Hill. "The events are reconstructed with good angular resolution, meaning that we know from which direction they came."
Dr Evelyn Malkus, IceCube's outreach coordinator, adds: "The detector accumulates data continuously, and the search for events or sources takes place for the most part after the data is sent to the university.
"At the South Pole we filter the data in the hardware and software as it is accumulated and stored, based on criteria that are broad enough to assure that we will not miss important features, yet selective so that we don't overload our data storage capacity if we kept everything."
All the data-taking, filtering and data transmission is automatic, says Dr Krasberg. "However, there is a lot of hardware associated with the detector, and things break," he says. "So during the winter months there is an automated monitoring system that emails and pages the winterovers when something goes wrong, and they then respond, typically within five minutes, and get the data acquisition running again.
"There are many redundancy features as well," he says. "For example, we expect a small number of DOMs to stop working each year, but there are enough of them that this has very little impact on the effectiveness of the detector. And in the event that the primary data acquisition software cannot run, we have an emergency backup data-taking system for emergencies."
The issue with the DOMs is that, once the detectors are frozen in the ice, they will stay there for the 25,000 years or so it will take for that portion of the ice to migrate to the coast of Antarctica. Signals can be sent to the detectors to change some of their operations, but maintenance and upgrades are impossible once they are in the ice.
There is another issue the project has to contend with at the South Pole, but this, counter-intuitively, is one of keeping equipment cool. Its ice cap makes Antarctica the highest continent on average, and the South Pole station itself sits at an altitude of close to 10,000ft; this rarefied atmosphere creates problems with overheating of air-cooled electronics.
"We therefore have systems in place to shut down overheating electronics automatically," says Dr Krasberg, "as well as automated monitoring systems to alert the winterovers in the event of a problem.
"Also, the ICL has been designed and laid out to minimise problems from overheating. It also has an air-handling system to keep the data centre cool."
While construction should be finished in January 2011, commissioning, verification, calibration and full integration with the DAQ will take a further three months. After that, IceCube is expected to deliver data for some time to come.
"The detector could theoretically run for many decades," says Dr Hill. "The limiting factor for long-term operation will be when it is decided not to continue spending money on running the experiment. We expect to obtain something like 10-15 years of funding, with the project reviewed every five years."