To get more peace of mind, military users are turning to formally proven software to save operating systems from themselves.
Colonel Bud Jones is not having a quiet retirement. When he stepped down from a command role in the US Army, he moved over to the US Central Command to take part in a programme that will see the US military overhaul its computing and communications infrastructure.
As subject matter expert for US Central Command, Jones is trying to help prevent the electronic sprawl that now dogs military operations. The armed forces have to deal with much more complex situations than before; for example, in Afghanistan and Iraq, the US is fighting alongside and sharing information with coalition troops that were on the opposing side not so long ago. "Is the enemy of yesterday a friend today?" asks Jones.
Many of the partners in a conflict want to share data, but every nation will inevitably deem some data too sensitive to share. As a result, a number of machines will sit on one network with a lower level of security while others - classed as secret or top secret - are attached to a much more secure network.
It's not possible to hack into one system from another, simply because there is no connection. The 'airgap' is the most advanced security measure today's armed forces use to ensure that a break-in on one networked system does not affect another. The problem is that the same people have to be able to use both types of system, and they may be faced with as many as five different computers - none of which are physically connected - in a working day.
The users are required to log off one system, walk to another - which may be in a different tent or office - and then log on to it.
"Today's network frustrates people because they have to get from one network to another. They have to remember all their different passwords, and we force them to change their passwords regularly," says Jones.
The problem is not just one of inconvenience to individual operators. There is the time lost in getting vital information, such as troop movements, from one coalition partner to another. And then there are the logistics. If you have to ship four times the number of computers, network switches and cabling to a forward position, you are tying up helicopter and cargo-plane capacity that could be used for food, ammunition and other supplies. Moving all that IT equipment around slows operations down.
"What we are looking for is a single infrastructure connected to a single wire and to remove the airgap," claims Jones. "We want information where it is needed to give access to coalition data from their desktops. What today is physically separate we want to put together virtually.
"What we are doing today is expensive. If we can consolidate to a single box and wire, we can save a lot of money."
The answer for Jones and his colleagues looks as though it will come from the world of embedded systems, an environment that uses much smaller code bases than desktop IT. This, in turn, makes it possible to check the software meticulously for holes and even to verify it formally. On their least secure systems, the military may well still run conventional operating systems, such as Microsoft Windows.
"At the Department of Defense, we say we have to have Microsoft and we are not going to change," argues Jones.
But, underneath, a kernel monitors access and ensures that the Windows portion does not see the applications running in a dedicated, secure part of the system. It's taken a while to get to this stage.
John Rushby, then working at the University of Newcastle upon Tyne, introduced the idea of a separation kernel for improving computer security in a 1981 paper presented at the Association for Computing Machinery's Eighth Symposium on Operating Systems Principles. The paper argued that a formally verified kernel "is widely considered to offer the most promising basis for the construction of truly secure computer systems, at least in the short term".
In practice, 'short term' meant more than 20 years. But, in the early 1980s, Rushby recognised there was a problem with traditional methods of enforcing computer security using standard operating-system designs. "Current approaches to kernel design and verification developed out of concern for the problem of providing multilevel secure operation on general-purpose multiuser systems - whereas many of the present-day applications which require some form of guaranteed security are special-purpose, single-function systems," Rushby wrote. "Attempts to support these applications on a conventional kernel have led to systems of considerable complexity whose verification presents difficulties that are quite at variance with the evident simplicity of the task which the system is intended to perform."
The answer is to put a security kernel in charge and run a number of operating systems on top, each with its own defined purpose and security clearance. No application in one domain can talk to one running in another domain without going through the separation kernel, unless the system is designed to allow that kind of communication. The approach is not dissimilar to virtualisation, in which guest operating systems run under a hypervisor that controls access to the hardware. Operating system vendor LynuxWorks plans to have its own hypervisor certified as a separation kernel, adding functions to perform security checks on messages that pass between domains.
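In code terms, the separation kernel's job can be pictured as a static communication policy that every inter-domain message must satisfy before it is delivered. The sketch below is purely illustrative - the domain names, policy table and function are hypothetical, and a real separation kernel enforces such a policy in formally verified code rather than a toy lookup table:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical domains; real systems define these per deployment. */
enum domain { DOM_UNCLASS, DOM_SECRET, DOM_TOP_SECRET, DOM_COUNT };

/* allow[src][dst]: a flow is permitted only where the system design
 * explicitly says so. Here, information may only flow "upwards" -
 * a top-secret domain can never send data to a less secure one. */
static const bool allow[DOM_COUNT][DOM_COUNT] = {
    /* to:  UNCLASS SECRET TOP_SECRET        from:       */
    {       true,   true,  true  },       /* UNCLASS     */
    {       false,  true,  true  },       /* SECRET      */
    {       false,  false, true  },       /* TOP_SECRET  */
};

/* The kernel consults the table before delivering a message;
 * anything not explicitly allowed is rejected. */
bool kernel_may_send(enum domain src, enum domain dst)
{
    if ((unsigned)src >= DOM_COUNT || (unsigned)dst >= DOM_COUNT)
        return false;
    return allow[src][dst];
}
```

The key property is that the policy is static and tiny - small enough to reason about exhaustively, which is what makes formal verification of the kernel tractable.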
For its part, software supplier Green Hills is using a certified version of its Integrity-178B kernel, already used in aerospace systems, as a separation kernel.
Less secure operating systems will run on top of the kernel, each in a separate space managed by an emulation layer - the Padded Cell software in Green Hills' case. LynuxWorks claims that using a hypervisor will provide higher performance than emulation. However, proving secure operation will involve detailed knowledge of how the virtualisation features inside the microprocessor work.
The search for EAL 7
With operating systems running under emulation or in virtualised partitions, instead of having to separate those systems with an airgap or dedicated secure-networking equipment, it becomes possible to collapse everything onto one piece of hardware. In principle, it will solve one of Jones's biggest headaches. As Rushby wrote in 1981: "The purpose of a security kernel is simply to allow such a 'distributed' system to actually run within a single processor."
A version of the Integrity-178B operating system running on a PowerPC processor was certified by the US government's National Information Assurance Partnership (NIAP) - organised by the National Institute of Standards and Technology (NIST) and the National Security Agency (NSA) - to Evaluation Assurance Level 6+ last autumn. The previous record holder was the STOP operating system sold by BAE Systems, which was certified to level 5.
Some vendors claim to have operating systems that they believe could be certified to the highest level, EAL 7, but none has done so yet. Founded in 2001, Aesec bought an operating system from Gemini Computers that is meant to be capable of passing EAL 7, but no one has taken it that far. LynuxWorks hopes to be the first with an EAL 7 certification for its LynxSecure product, but has yet to begin the evaluation procedure. Wind River claimed in 2004 to have started work on getting its Safety Critical ARINC 653 product certified to EAL 7, in conjunction with Smiths Aerospace. Almost five years on from that announcement, however, the product has not entered evaluation, and Wind River declined to comment on whether it had made any progress towards launching one.
Steve Blackman, director of business development for aerospace and defence at LynuxWorks, says the company expects to start evaluation this year, possibly on more than one project. He claims the product is ready, but that the way the certification process is set up means a vendor cannot take a product through on its own. To certify above EAL 4, NIAP has to enlist help from NSA experts - a resource that can only be justified if the agencies are satisfied that the project is worthwhile. In practice, it means having a sponsor.
"The way it works in the US, to be evaluated you have to be sponsored by a programme. You used to be able to just submit yourself. But we have a few programmes taking us through the evaluation process," Blackman explains.
As the first company to succeed in getting an operating system approved to one of the highest security standards possible, Green Hills aims to make the most of its position. The company is so confident that the idea will prove fundamental to IT security among commercial users that Green Hills has created a subsidiary to sell to organisations such as banks and industrial users.
Blackman says he has doubts about the applicability of EAL 6+ or 7-rated software outside the military and avionics areas. "The requirements are very stringent and the constraints on using the system are very high. They are more stringent the higher you go up and that puts restrictions on your systems. Can you open a door with just one key or do you need a key and four padlocks? High assurance is possible in a highly restricted world."
Dan O'Dowd, president of Green Hills, asks rhetorically: "Is EAL 6+ too secure? We can dial in whatever level of security is required."
By letting people run their existing operating systems, such as Windows, while preventing them from accessing sensitive areas on the same machine, Green Hills hopes to encourage users to harden their systems gradually. Over time, they might shift critical routines, such as Secure Sockets Layer (SSL) encryption for web services, out of a less secure operating system and into the care of the secure kernel itself. Adoption of the POSIX programming interfaces should make that job easier.
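The point about POSIX is portability: a routine written only against the standard POSIX interfaces carries no dependency on any one host operating system, so it can in principle be rehosted on a POSIX-conformant secure kernel without rewriting. A minimal, hypothetical illustration using only POSIX threads (the worker and counter are invented for the example):

```c
#include <assert.h>
#include <pthread.h>

/* A routine using nothing but POSIX calls: pthread_create, mutex
 * locking, pthread_join. Any OS offering the POSIX interfaces -
 * including a partition on a POSIX-conformant secure kernel -
 * can host it unchanged. */

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long counter;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);   /* serialise access to counter */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

/* Run n worker threads (n <= 8) and return the final count. */
long run_workers(int n)
{
    pthread_t t[8];
    counter = 0;
    for (int i = 0; i < n; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < n; i++)
        pthread_join(t[i], NULL);
    return counter;
}
```

Because every call here is specified by POSIX rather than by a particular vendor, moving the routine from, say, a Linux guest into a secure partition is a recompile rather than a port.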
Although LynuxWorks is more sceptical of the general applicability of high-level EAL-certified operating systems, the company is planning for a similar migration of function. "This ability to take legacy code that wasn't in a security environment and migrate it into secure bits of hardware will be a key factor," says Day.
In the meantime, Jones and colleagues are pressing ahead with the single-machine plan. "I asked the question: can we do this on a LAN? Green Hills came back and laid out a proposal. We are sponsoring a joint technology demonstration called OB1," he says, adding that the name "gets everybody's attention".
"What's the payoff? Instead of having to load ships with four lots of equipment and wire, you just ship one. You save on cost and ship stuff quicker. Central Command has about 35,000 users spread over five networks. We can avoid a cost of $200m using this solution, so we are very interested in this going to fruition."
A lot of other users in the military and elsewhere will look at the programme and ask the same question: can you help us, OB1?