Power is nothing without control

Nuclear power is all the rage but what will keep it under control in the future?

First there was the announcement by the UK government in January that it wants new nuclear plants to be built to replace the country's 23 ageing reactors, which are scheduled to close over the next 20 years or so.

Then, in May, Prime Minister Gordon Brown said the UK needs to increase its nuclear capacity beyond the current 20 per cent share of the country's electricity supply, implying that the government plans to go beyond straight replacements.

According to some analysts there are three reasons for this desire to increase nuclear's share of UK generating capacity. The principal factor is sharply rising oil and gas prices, but there is also the issue of climate change and the need for energy security.

Oil is now past the $145 a barrel mark, about twice what it was 12 months ago and showing no signs of falling. The UK also has an EU commitment to cut its carbon dioxide emissions by 60 per cent from 1990 levels by 2050, and the government believes nuclear energy will help it achieve that. And, as North Sea oil and gas run out, the government does not want to be over-reliant on supplies from, say, the Middle East and Russia.

The new plants will be the first of their kind in the UK since Sizewell B opened in 1995. Sizewell B was also notable for incorporating the UK's most extensive integration of digital control technology at the time - a milestone in the move away from analogue control systems that began with the Advanced Gas-cooled Reactors in the early 1970s.

But while all new plants are designed with digital systems, analogue control still features in many older plants - and will continue to do so for some years, even when upgrades or refits are needed.

"We still use analogue control equipment in many of our nuclear plants," says Paul Tooley, chief control and instrumentation engineer at British Energy, the UK's largest electricity generator by volume and owner of Sizewell and several other nuclear plants on the British mainland. "And we wouldn't necessarily upgrade to digital even if a plant had 10 to 15 years of life left in it - going digital is not a 'silver bullet'. In fact, refits may sometimes use analogue equipment, and that will be a commercial decision because of the challenges involved with digital systems."

Tooley says it is vital to understand fully the safety requirements. "Older power plants may have to fulfil requirements now that differ from those when the plants were originally developed, so you have to make the safety case for digital systems," he explains. "Whichever technology you choose, for a nuclear plant the key issue is the need to make the safety case - everything else flows from that.

"Analogue has an advantage here in that the systems are very analysable and testable; with digital it's more of an issue to demonstrate the safety of the software, because of the nature of software in general," he says.

Software problems

This issue of software safety bedevilled the nuclear industry in the early days. For example, before Sizewell B began commercial operation there was concern over its shutdown system, with some experts saying the software was too complex to verify using formal mathematical methods, while the perception in the nuclear and software communities of secrecy surrounding the results of tests on the software led to scepticism about its overall safety.

So it was against this backdrop of uncertainty about the technology and its verification for nuclear and other high-hazard plants that, in 1998, the IEC published the 61508 international standard on the functional safety of electrical, electronic and programmable electronic safety-related systems. A seminal document, it set out the requirements for ensuring that systems are designed, implemented, operated and maintained to provide the required level of safety integrity.

This was followed in 2001 by the nuclear-sector implementation of IEC 61508, IEC 61513, which itself was followed in 2004 by IEC 61511, on the functional safety of instrumented systems for the process industry.

Despite the publication of such standards, however, the various nuclear regulators across western Europe acknowledged that assessing the software takes more than verification and testing - factors such as the quality of the processes and methods for specifying, designing and coding are also important - and that these and other standards did not provide exhaustive guidance on how to assess those factors.

Potential consequences

A consequence of this was that the licensing approaches varied between countries, creating costly delays in implementing these systems because of difficulties in coordinating their development and qualification. So in 2006-7, nuclear safety regulators and authorised technical support organisations from Belgium, Finland, Germany, Spain, Sweden and the UK produced a consensus report on how the systems should be licensed.

This so-called Seven Party Report (there were two bodies from Germany) specifies more than 20 areas of common ground regarding the licensing of safety software for nuclear plants, and covers generic and lifecycle issues ranging from initial safety demonstration and requirements to commissioning and configuration management.

It's a lengthy and detailed list - understandably, given the need for very high levels of reliability and safety in nuclear plants. Although digital control systems in these plants are designed and built in essentially the same way as those in, say, the chemicals and aircraft industries, they must be relied on to reduce the likelihood of even low-probability events because of the potentially far greater consequences of nuclear accidents.

However, says the Nuclear Installations Inspectorate of the Health and Safety Executive, the level of protection provided by nuclear safety systems is "commensurate with the onerous requirements placed on them".

Bob Jennings, manager of the systems assessment unit in the New Civil Reactor Build Division at the HSE, says: "For example, in many UK nuclear facilities the most onerous initiating events - those that, if not acted on by the safety systems, would lead to a hazardous outcome - have two lines of protection. Each one is often achieved using safety control equipment that employs at least two fully redundant divisions of safety equipment and on some occasions as many as four redundant divisions."
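The redundant divisions Jennings describes are typically combined with majority voting, so that no single failed channel can either spuriously trip the plant or mask a genuine demand. A minimal sketch of a two-out-of-four vote - the threshold, function name and channel layout here are illustrative assumptions, not a description of any specific UK plant:

```python
def vote_2oo4(channel_trips):
    """Two-out-of-four majority vote over redundant protection channels.

    channel_trips: list of four booleans, True meaning that channel has
    detected a trip condition. The system trips only when at least two
    independent channels agree, so a single failed channel can neither
    cause a spurious trip nor conceal a real demand.
    """
    if len(channel_trips) != 4:
        raise ValueError("expected exactly four redundant channels")
    return sum(channel_trips) >= 2

# One faulty channel reading high does not trip the plant...
assert vote_2oo4([True, False, False, False]) is False
# ...but two independent detections do.
assert vote_2oo4([True, True, False, False]) is True
```

The same structure scales to other voting arrangements (one-out-of-two, two-out-of-three) by changing the channel count and threshold.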

But, he says: "The use of digital control and protection technology does not obviate the need for standard methods of safety assessment for nuclear facilities. If anything, its use tends to increase the complexity of the methods.

"The safety analysis of digital systems is based on a combination of dynamic analysis techniques, of which traditional testing is one part, and static analysis techniques. Static analysis is a very powerful technique which at its most rigorous level treats the digital system as a mathematical entity that can be reasoned about using techniques such as theorem proving," he says.

"In an ideal world, if one could rigorously prove all behavioural properties using mathematical methods then testing would not be necessary. But the complexity of industrial-scale systems is such that static analysis alone is not adequate, and research has shown that a powerful combination of static and dynamic analysis techniques is the best way to achieve a high degree of confidence in the performance of a digital safety system."
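The distinction Jennings draws can be shown in miniature. Over a small finite input domain a property can be checked exhaustively, in the spirit of static analysis; over a realistic domain it can only be sampled, which is dynamic testing - hence the combination. This is a toy illustration only, with an invented function and range:

```python
import random

def clamp(x, lo=0.0, hi=100.0):
    """Clamp a sensor reading into a safe range (invented example)."""
    return max(lo, min(hi, x))

# "Static"-style exhaustive check: over a finite domain we can establish
# the safety property lo <= clamp(x) <= hi for every single input.
assert all(0.0 <= clamp(x) <= 100.0 for x in range(-1000, 1000))

# "Dynamic"-style testing: for a continuous domain we can only sample,
# so testing alone never demonstrates the property for all inputs.
for _ in range(1000):
    x = random.uniform(-1e9, 1e9)
    assert 0.0 <= clamp(x) <= 100.0
```

Theorem proving, as Jennings notes, plays the role of the exhaustive check even when the domain is infinite, which is what makes it so powerful - and so demanding.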

Hindered development

Historically, the development of these systems has been hindered by communication barriers between the technical communities and those involved in their implementation.

With work being carried out simultaneously in many areas - each with its own technology, research focus and agenda - common terms have sometimes had different meanings to different groups. This has been a major headache for the nuclear power industry and its regulators, who are not dominant in this technology and who have had to try to distil the information and experience from various sources and apply it in power plants.

But the nuclear industry is not alone in this. As Jennings explains: "While it's true that the nuclear industry worldwide is not a dominant player in the overall control and instrumentation industry, achieving good communications among the diverse experts employed in complex projects, such as the design of a nuclear facility, is an issue throughout the world and in all industrial sectors. The introduction of digital control and protection systems didn't radically change the nature of the problem, it just added to the difficulty.

"There are no simple solutions and much about this topic is reflected in many modern standards," he says. "One of many approaches is the use of a lingua franca based on the rigour of formal mathematical specifications for systems that involve contributions from scientists and engineers from different disciplines. The use of mathematics considerably helps to overcome the problems of ambiguous natural language specifications, and forces experts to share a common understanding by converting specialist jargon and terms into mathematically precise statements."

So clearly there are still issues with the use of digital control systems in nuclear power plants - some of them ongoing, but equally clearly not insurmountable. The migration to digital will, of course, continue - British Energy's Tooley, for one, is unambiguous about that - so what's the state of play regarding current hardware and software, and what does the future hold in terms of their development?

According to Tooley, digital upgrades for existing control systems hardware are not bespoke. "British Energy uses equipment that is generally available but qualified for the nuclear sector," he says. This is a similar approach to the Commercial Off-The-Shelf (COTS) principle used by a growing number of military forces around the world, where vehicles and vessels are fitted with technology developed in the commercial sector and adapted for the military environment.

The aim with COTS in the military is to design and build a "platform" - whether it is an armoured car or a nuclear submarine - such that it can be upgraded throughout its life with new technology as and when it becomes available, without the need for complete and expensive refits. It's not exactly "plug and play"; instead, these new platforms are designed with open systems architecture so that "bundles" of future and therefore as-yet unspecified technology can be fitted during their lifetime.

The underlying reason for this new approach is that modern military equipment has to have a life of 30-40 years - the old way of building for the military is simply no longer affordable, even in the US - and this is roughly comparable to the life of nuclear plants. But while British Energy and other operators do use COTS, Tooley sees only limited take-up of its open systems aspect. He says: "A totally open systems environment may be difficult to achieve because of the mix of technologies and different ages of the equipment."

Even so, with new PWR plants expected to operate for about 60 years, the longevity of control systems in nuclear plants is clearly an issue, although here Tooley says it's more a case of maintaining support from product suppliers. It's conceivable, however, that some suppliers may go out of business during that time; Tooley says allowing for that depends on the appropriate management of spares and obsolescence.

As for future technology, Tooley sees one development in the field of smart sensors, with embedded software and field programmable gate array (FPGA) technologies. FPGAs, which can be reconfigured after manufacture, offer some exciting scope in signal processing.
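The smart sensors Tooley describes typically condition the signal at the sensor itself before transmission. Something like the moving-average filter below is the kind of processing an FPGA would implement in hardware, with the window length standing in for a parameter that post-manufacture reconfigurability lets an operator change in the field. The class, names and values are invented illustrations, not a real product:

```python
from collections import deque

class SmartSensorFilter:
    """Moving-average filter of the kind a smart sensor's FPGA might
    apply to raw samples before transmission. The window length stands
    in for a parameter that FPGA reconfigurability would allow to be
    changed after deployment (values here are arbitrary).
    """
    def __init__(self, window=4):
        self.samples = deque(maxlen=window)

    def process(self, raw):
        """Accept one raw sample, return the smoothed reading."""
        self.samples.append(raw)
        return sum(self.samples) / len(self.samples)

f = SmartSensorFilter(window=4)
readings = [f.process(x) for x in [10.0, 10.0, 14.0, 14.0]]
assert readings[-1] == 12.0  # average of the last four samples
```

In a real FPGA the same logic would run as fixed-point pipelined hardware, but the reconfigurability argument is the same: the filter can be altered without replacing the device.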

Fusion not fission

However, the story doesn't end here, because a lot of money and work has for some years been going into the next ingredient in the nuclear recipe - fusion. And while no one wants to put money on when we'll see the first commercial fusion reactors (mid-century being a fair guess at the moment), even a cursory knowledge of the fusion process implies that their control systems will need to be quite different.

The centre of UK research into fusion energy is the UKAEA Culham site. It's home to the Joint European Torus (JET), which was designed to carry out fusion experiments in conditions close to those of a commercial reactor. Dr Robert Felton is the real-time measurement and control systems manager in the Plasma Operations Group at Culham and, as he explains: "Fission is an 'avalanche' process, so you have to use moderators, which have to be extremely reliable and failsafe. In a fission plant, the actual control logic is not hugely complicated - but not trivial - or distributed.

"But in a fusion plant, where the plasma may quench and damage the containment, the control systems have to be much more sophisticated and distributed. They need complicated integrated control of large (superconducting) magnets and gas injection, while auxiliary heating needs high-speed data acquisition, signal processing and communications between large and complex main components.

"So for a fusion plant a practical system will need to have a communications and database infrastructure that is available, reliable and maintainable; the plant systems should be autonomous, and standalone and integrated operations should be configurable and traceable with interlocks and alarms. All the basic technologies exist to realise these systems but they need adaptation."

JET has been running since 1983 but since 2000 it has been operated by the EC's European Fusion Development Agreement (EFDA) as part of work on "next generation" devices such as ITER.

The ITER project is intended to demonstrate the essential fusion technologies in an integrated system, and first plasma operation is expected in 2016. But although the control engineering issues are largely known, says Dr Felton, ITER's size (twice the linear dimensions and ten times the volume of JET) and complexity - and the fact that it's an international collaboration - present its engineers with greater challenges than with present-day fusion machines.

One issue is our old friend, standardisation. "ITER's main components will use considerable ADC, DAC and signal processing," explains Dr Felton. "To operate them separately for commissioning, and collectively for plasma operations, there will need to be a hierarchical real-time command and control infrastructure. In most present-day machines these have grown organically, and there are inconsistencies even in one machine. At the moment there are no common standards for interchange of equipment, codes or data between machines.

"So the great idea is to have a central Control, Data Access and Communication (CODAC) system in ITER, with the main plant components running their own control and data systems, all communicating in real time through common plant-system interfaces," he says.
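The architecture Dr Felton describes - autonomous plant systems behind a common interface, coordinated by a central CODAC layer - can be sketched in outline. The class and method names below are invented for illustration; ITER's actual plant-system interface specifications are far richer and operate in real time:

```python
class PlantSystem:
    """An autonomous plant system (magnets, gas injection, heating...)
    exposing a common status interface to the central coordinator.
    Names and states are invented for illustration only.
    """
    def __init__(self, name):
        self.name = name
        self.state = "standalone"  # commissioned and tested on its own

    def status(self):
        return {"system": self.name, "state": self.state}

class Codac:
    """Central coordinator: registers autonomous plant systems and
    switches them collectively into integrated plasma operation."""
    def __init__(self):
        self.systems = []

    def register(self, system):
        self.systems.append(system)

    def start_integrated_operation(self):
        for s in self.systems:
            s.state = "integrated"
        return [s.status() for s in self.systems]

codac = Codac()
for name in ("magnets", "gas_injection", "auxiliary_heating"):
    codac.register(PlantSystem(name))
statuses = codac.start_integrated_operation()
assert all(s["state"] == "integrated" for s in statuses)
```

The key design point is the one Felton makes: each system can be commissioned standalone, and the same common interface then supports collective operation, avoiding the organic, inconsistent growth seen in present-day machines.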

ITER will not be a commercial reactor; however, these issues are pointers to the way ahead. Dr Jonathan Farthing is head of the CODAS & IT department at Culham - CODAS is the UKAEA's equivalent of CODAC - and he says: "ITER is an intermediate step between what we have at the moment and what we might see in a future fusion power station. Its conceptual design showed that there are solutions for all the functional requirements; the challenge will be to combine them into a single system - and then a simpler and therefore more robust system for a commercial plant."

It should not be too much to expect that, in 30 years or so, we could see our nuclear energy coming from a mixture of fission plants relying on a range of analogue and digital control, and fusion plants using systems yet to be devised. Control engineers of the future will be in for some interesting times.
