Open source = open invitation?
Ongoing questions about the security of open source software may be unfounded, but users require better proof and assurance, finds E&T.
Predictions by market-watcher Gartner suggest that, by 2012, 80 per cent of commercial software will include elements of open source licensing. Yet many IT professionals, particularly those in the business world, still worry that open source applications are too insecure to support mission-critical operations, and reckon the risk is not worth the money saved on the initial purchase price of the software itself.
But are open source applications really any more or less secure than closed source or licensed equivalents? And are hackers more or less likely to target them because of any real or perceived vulnerability in the way they are designed or implemented?
Emma McGrattan is senior vice-president of engineering at open source database company Ingres, previously a division of software giant Computer Associates (CA), which took the then-controversial decision to move Ingres from closed source to open source in 2004.
At the time, some of Ingres' customers had experience of open source software, such as Linux or Apache, but for the majority it was their first foray into open source architecture.
McGrattan says that security was a grave concern for Ingres customers around the world, many of whom were worried that anybody could alter the underlying source code. "They were terrified, and we had to go out and educate customers around what it means to have an open source security product, particularly with respect to application security," she says.
McGrattan holds the view that companies like Microsoft are spreading fear, uncertainty, and doubt (FUD) because they have a vested interest in protecting revenue from closed source software. But also, smaller open source applications based on programming languages like PHP and SQL are causing problems because they are very easy to write and were originally created with little thought for security, she adds.
This is backed up by a study of the security of open source applications completed by application security consultant Larry Suto in July 2008. Sponsored by security company Fortify Software, the report suggested that the most widely-used open source enterprise software packages are exposing users to significant business risk.
It examined 11 of the most common open source packages based on Java - Derby, Geronimo, Hibernate, Hipergate, JBoss, JOnAS, OpenCMS, Resin, Struts, OFBiz and Tomcat - with multiple versions downloaded and scanned for vulnerabilities using Fortify's own software.
Though no comparisons to closed source equivalents were made, the survey concluded that many open source software development communities were yet to adopt secure development processes, address vulnerabilities or provide users with access to security expertise.
"Few open source projects provide documentation that covers the security implications and secure deployment of the software they develop, a dedicated email alias for users to report security vulnerabilities, or easy access to internal security experts to discuss security issues," Suto says.
Open source communications
Many other open source project websites do at least include issue tracking facilities, however, and email links encouraging users to report any bugs, including security vulnerabilities. And though internal security experts are few and far between in open source projects, the developers compiling the original code may be a better source of information on security issues anyway.
David Maxwell is open source strategist at software integrity specialist, Coverity, which regularly scans applications for vulnerabilities.
"It [security] varies on a project to project basis, both in the open and closed source application world," he says. "Even with single-user projects, it is not uncommon to have a secure email address or a Web page point-of-contact, or people to report bugs - the question is how much resource the developer has to deal with incoming comments."
Maxwell adds that it is hard to even identify a genuine security vulnerability in most cases. "When you break down common types of [software] defect, it is not black-and-white if a particular defect is an 'exploitable' security vulnerability, or if it is not," he opines.
Amichai Shulman, chief technology officer at application data security firm Imperva, laments the fact that people often lump less-sophisticated public domain and open source applications in the same basket.
Shulman acknowledges, however, that the security of every open source software project has to be assessed on how it is run and managed - something that applies equally to closed source software. But he argues that the open/closed categorisation is not an accurate gauge of whether an application is secure anyway - the real differentiator is who owns the software in the first place, and whether there is a clear and coherent security process for detecting and fixing vulnerabilities and keeping users informed of progress.
"If you take a voluntary organisation made up of people doing after hours or community work, and they do not commit to patching cycles, that is always a problem; or sometimes they fix things fast, but do not peer-test within the community, which is worse," Shulman avers, "but if an open source project has a clear owner, and clear milestones for releasing security patches, then there is generally no security issue."
The same is true for closed source software companies, which also need definitive procedures for detecting vulnerabilities, paying attention to those reported by researchers, setting dates to fix them, and communicating information to users; without those, closed source products "would be less secure than open source", insists Shulman.
The most common misconception, says Shulman, is that closed source software is more secure because hackers are unable to get their hands on the source code, and open source is less secure because it is freely available for people to alter: "Some people have the perception that with closed source software, hackers do not see the vulnerabilities and do not know about them, and that makes it more secure, but I totally disagree with that," he argues.
"People tend to think 'if the source code is there and anybody can change it - how can I protect it?' - but people do not hack products through the source code," adds McGrattan, although she concedes that there is a clear distinction between small community projects and professionally managed products. If somebody wants to make a change to the Ingres source code, for example, they have to submit the changes for review, and the changes then go through a formal testing process.
"There is rigorous peer review and testing, and only after it has met our criteria do we allow the change," McGrattan explains. "You have to have a support team that knows how to handle security vulnerabilities and best practice on fixing them. Then make sure that all fixes are available across all platforms, so that customers are confident it is being handled."
Karl Wirth is director of security business at Red Hat, which distributes various open source operating systems and applications. He too believes it is a flawed argument that attackers can more easily find and exploit weaknesses in open source code because the code is made public. In fact, says Wirth, the public nature of open source development and implementation allows other developers to analyse the code more thoroughly, find weaknesses and correct them more quickly, whereas closed source software vendors rely on smaller numbers of their own staff to do the same thing.
"Because everything is done in the open, the open source community must respond quickly to a found vulnerability. Closed source application vendors can wait months before they develop a fix for a vulnerability that is privately communicated to them," says Wirth.
Also, hackers tend to exploit faults in application design that allow them to take control of the software - faults they are just as likely to find in closed source software. "A serious attacker will be able to find the design and implementation flaws in closed as well as open source software," Wirth says. "Look at cryptographic algorithms - the most trusted are those that have been published openly, and have had the longest public scrutiny."
Imperva's Shulman concurs: "If the software is bad, it can take a good hacker as little as five minutes to find the weak spots and abuse them." The notion that hackers only bother to target well-known, widely used applications would seem to favour open source applications that operate below most people's radar. Shulman disputes this, though, pointing out that, for the hacker, it is always a question of motivation, irrespective of the software.
"If you have a billion dollars' worth of goods on an online application, and you are using an esoteric piece of software that nobody else uses, and you believe that this fact alone will grant you more security, then you are utterly wrong," Shulman chides.
Any lingering doubts about the security of open source applications can be dispelled if open source developers follow a few basic rules.
First, they need to have a point of contact so that users who find security issues can notify the owner and get an immediate response confirming that the vulnerability has been noted and given an internal identification number. The developers also need to send out regular reports listing all reported vulnerabilities and outlining the action that will be taken to remedy them by a fixed point in time. This goes a long way towards creating the perception that the software is secure.
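The intake-and-acknowledgement flow described above can be sketched in a few lines of Python. This is purely illustrative - the `SecurityDesk` class, the `VULN-` identifier scheme and the 90-day deadline are invented for the sketch, not taken from any real project's tooling:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class VulnReport:
    reporter: str
    summary: str
    vuln_id: str = ""
    fix_due: date = None

class SecurityDesk:
    """Point of contact that acknowledges each report with an internal
    ID and a target fix date, as the rules above recommend."""
    def __init__(self):
        self.reports = []

    def receive(self, reporter, summary):
        report = VulnReport(reporter, summary)
        report.vuln_id = "VULN-%04d" % (len(self.reports) + 1)
        # 90 days is an arbitrary illustrative remediation window
        report.fix_due = date.today() + timedelta(days=90)
        self.reports.append(report)
        # Immediate acknowledgement back to the reporter
        return "Thanks %s: tracked as %s, fix due by %s" % (
            reporter, report.vuln_id, report.fix_due.isoformat())

    def status_report(self):
        # The regular report listing all vulnerabilities and deadlines
        return ["%s: %s (due %s)" % (r.vuln_id, r.summary, r.fix_due)
                for r in self.reports]

desk = SecurityDesk()
print(desk.receive("user@example.org", "buffer overflow in parser"))
print(desk.status_report())
```

The point of the sketch is the shape of the process, not the code: every report gets an identifier the reporter can cite, and every identifier carries a date the project can be held to.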
Google and OCERT
Google, for example, launched a security group for open source in May 2008. OCERT (open source computer emergency response team) is a volunteer workforce that aims to co-ordinate communication about security vulnerabilities among open source software developers and help in debugging and patching, much like US-CERT and others.
Red Hat's Wirth also encourages additional software testing and quality assurance to eliminate bugs and potential vulnerabilities before code is shipped. As a commercial company that makes money from selling support and maintenance contracts for applications based on open source code, it's no surprise that Red Hat advocates building service levels around support, as well as a security response team responsible for monitoring all publicly and privately reported vulnerabilities - and providing rapid patches to address them.
Nor is it just a case of making individual open source applications secure, but also securing the applications that access them, says McGrattan: "We have to counter that by writing secure database applications, avoiding SQL injection techniques and so on, but these are things that you would have to think about if you were using Microsoft or anyone else's software as well."
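The SQL injection technique McGrattan mentions arises when user input is concatenated into the text of a query; parameterised queries avoid it by sending values separately from the SQL. A minimal sketch using Python's standard-library `sqlite3` module (the `users` table and the functions are illustrative, not from Ingres or any product discussed here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

# Vulnerable: attacker-controlled input is spliced into the SQL text,
# so a payload like "' OR '1'='1" matches every row in the table.
def find_user_unsafe(name):
    return conn.execute(
        "SELECT name FROM users WHERE name = '%s'" % name).fetchall()

# Safe: the placeholder keeps the value out of the SQL text entirely,
# so the input is always treated as data, never as SQL.
def find_user_safe(name):
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # injection succeeds: returns all rows
print(find_user_safe(payload))    # returns no rows
```

As the article notes, this discipline applies regardless of whether the database underneath is open or closed source.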
But voicing doubts about the security of open source software may well be a smoke screen designed to hide the real concern for most customers. Opponents may mean something entirely different when they criticise open source software for being 'insecure' - rather, it is simply that they are loath to use any piece of software for which they cannot hold another company responsible if something goes wrong.
"The reluctance to use it is sometimes more related to the fact that big companies mix up security with the ability to sue someone," says Shulman. Maxwell agrees, but argues that the idea that companies have legal recourse against a corporation is a myth in most cases.
"Lots of people hold that up as an argument against open source software, but the percentage of people who would actually follow through with legal action successfully is very small," adds Maxwell.
Open source software is often seen as the preserve of geeks and students, inhabiting a world of which many IT professionals have no experience and no desire to explore: "Lots of corporates remain unfamiliar with open source and have grown up in an environment where everything is provided for them.
"The freedom they now have to go into the open source market and pick and choose components is often difficult to come to terms with."