Software in bits

Software componentry promises process efficiencies when it comes to coding up enterprise applications, but does its complexity militate against the theoretical gains? E&T investigates.

The notion that software can - and should - be built from reusable prefabricated components dates back to the nascent days of enterprise computing, when the growing importance of (and dependence on) software had led to duplication of programming effort, and a proliferation of unwieldy, poorly-documented code. With main memory on large mainframes then just 64 KB (if you were lucky), programs had to be broken up into smaller modules just to accommodate the code and immediate data.

These were not generally reusable components, but did engender the skills required to write more efficient, properly documented software in manageable chunks. At the same time, reusable programs in the FORTRAN language - called subroutines - were written to execute a range of scientific and mathematical algorithms, such as matrix inversion, along with many more obscure functions. The era of reusable components had begun.

Seeking reusability

Over time the scope of software componentry extended into the business arena, and beyond single languages or proprietary operating systems, to become more portable across different computer brands. Yet the goal of complete reusability remained elusive, and is now regarded by some of the field's adepts as a futile chase.

"'Total reusability' is nonsense," says Clemens Szyperski, a software architect at Microsoft in Redmond specialising in componentry research. He argues that every non-trivial task is partly implicit, involving some degree of context that requires additional processing, and cannot be done in isolation. Only certain well-defined tasks within agreed contexts can be totally reused, like those mathematical algorithms in FORTRAN subroutines.

Szyperski does admit that progress has been made in delivering functional components that underpin the fabric of distributed computing and Web services, but as the sum of many small advances rather than through any substantial breakthrough: "Progress has been slow because people grabbed the component idea as the next silver bullet, of which unfortunately there are decidedly few."

He cites the arrival of plug-in architectures - from Microsoft's OLE/ActiveX through to later technologies such as OSGi (the Open Services Gateway initiative) and Microsoft's MEF (Managed Extensibility Framework) - as products of this steady progress.
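A flavour of what these plug-in architectures involve can be had from a minimal OSGi component, sketched in Java below. Only the BundleActivator, BundleContext and ServiceRegistration types belong to the OSGi framework; the greeting service and all of its names are invented for illustration.

    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;
    import org.osgi.framework.ServiceRegistration;

    // Hypothetical service contract: the only thing consumers ever see.
    interface GreetingService {
        String greet(String name);
    }

    class GreetingServiceImpl implements GreetingService {
        public String greet(String name) {
            return "Hello, " + name;
        }
    }

    // A minimal OSGi bundle activator. The framework calls start() and
    // stop() as the bundle is activated or deactivated, so components can
    // be added to and removed from a running system without restarting it.
    public class GreetingActivator implements BundleActivator {

        private ServiceRegistration<GreetingService> registration;

        public void start(BundleContext context) {
            // Publish the implementation in the service registry, where
            // other bundles discover it by interface rather than by class.
            registration = context.registerService(
                    GreetingService.class, new GreetingServiceImpl(), null);
        }

        public void stop(BundleContext context) {
            // Withdraw the service; well-behaved consumers must cope.
            registration.unregister();
        }
    }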

Diffusion confusion

If the computing industry came to a standstill, then we would probably need look no further than these environments, and regard the software component problem as settled, even if not finally solved to complete satisfaction. But the scope of IT has widened dramatically over the last few years, with the eruption of mobile computing involving diverse portable devices only intermittently connected to network services, and also embedded computing in anything from pacemakers to traffic lights. These have brought new challenges for software components, as well as accentuating traditional dilemmas - such as how coarse- or fine-grained to make them.

This diffusion of computing has made it even harder to give components the portability they need, according to Davy Preuveneers, researcher in the embedded systems and components group at the Catholic University of Leuven in Belgium. "One of the biggest challenges right now is the huge heterogeneity in hardware that is on the market," he believes.

"Developing a single application that runs on all devices is not straightforward [even with standardised runtime environments like J2ME]." J2ME is the Java Platform Micro Edition, a subset of the full set of Java APIs for developing software to run on small resource-constrained devices.

Mobile phone operating systems have evolved rapidly to insulate applications from the underlying hardware, providing some measure of portability and component reusability, as Preuveneers acknowledges. "Whereas previously we had to talk directly to the hardware on the device to get the information we needed [for example collect the available resources on the device, find out to which cell tower the mobile phone is connected], and had to deal with the fact that each device was different, we can now use more and more standardised programming interfaces [in the form of libraries or SDKs] to collect such information."
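Android's TelephonyManager is one example of such a standardised interface. The sketch below - assuming a GSM handset, and an application that holds the coarse-location permission - reads the serving cell tower through the SDK rather than through any device-specific code.

    import android.app.Activity;
    import android.content.Context;
    import android.os.Bundle;
    import android.telephony.TelephonyManager;
    import android.telephony.gsm.GsmCellLocation;
    import android.util.Log;

    // Reading the serving cell through a standard SDK interface instead
    // of device-specific code. Requires the ACCESS_COARSE_LOCATION
    // permission in the application's manifest.
    public class CellInfoActivity extends Activity {

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            TelephonyManager tm = (TelephonyManager)
                    getSystemService(Context.TELEPHONY_SERVICE);
            GsmCellLocation cell = (GsmCellLocation) tm.getCellLocation();
            if (cell != null) {
                Log.i("CellInfo", "Operator: " + tm.getNetworkOperatorName()
                        + ", cell id: " + cell.getCid()
                        + ", location area: " + cell.getLac());
            }
        }
    }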

Such standardised interfaces enable software to be ported from one device to another at the implementation stage; but a fluid and loosely-coupled mobile computing environment supporting real-time interoperability between applications and services also requires run-time portability, which Preuveneers reckons will come in the next few years. "My take is that the same thing will happen with the existing runtime environments, but with one step at a time, and driven by the kind of applications that are popular at that moment."

The run-time environment will provide the hooks and interfaces components need to execute, whatever device they happen to be on. This brings additional challenges, since devices will vary in availability of resources such as memory, CPU, network bandwidth, and battery power. Components will need the ability to operate in isolation for periods to save bandwidth, and switch to passive mode to reduce CPU and power consumption.
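No standard contract for this yet exists, but it is possible to imagine one. The Java interface below is purely hypothetical - every name is invented for illustration - but it captures the kinds of transition such a run-time environment would ask of its components.

    // Hypothetical lifecycle contract for a resource-aware mobile
    // component. The runtime, not the component, decides when resources
    // are scarce and invokes the appropriate transition.
    public interface ResourceAwareComponent {

        // Normal operation: the component may use the network freely.
        void activate();

        // Connectivity is gone or expensive: keep working on local
        // state only, and synchronise later.
        void workOffline();

        // Battery or CPU is running low: suspend non-essential
        // processing until the runtime calls activate() again.
        void enterPassiveMode();
    }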

Size and location

In order to deliver worthwhile mobile applications exploiting location and 'presence', components must do more than just interact and conserve resources. They also have to become aware of where they are, of their context. This is a higher level problem, with mobility creating new opportunities, in particular through making location-based information available, which in turn adds an extra dimension of complexity.

The challenge is to develop a common understanding of information about the user's location and presence, with the ability to prioritise different sources of data, and learn from past contexts to analyse the present requirements of the service or application and possibly predict the future.
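At its simplest, prioritising sources of data might amount to something like the hypothetical Java sketch below, which keeps the most accurate, sufficiently recent fix from among competing providers; genuine context-awareness, with learning and prediction, would sit on top of logic of this kind.

    import java.util.List;

    // Hypothetical sketch: choose the best of several competing location
    // fixes. Each source (GPS, Wi-Fi, cell) reports an accuracy estimate;
    // the most accurate fix that is recent enough wins.
    public class LocationArbiter {

        public static class Fix {
            final String source;         // e.g. "gps", "wifi", "cell"
            final double accuracyMetres; // estimated error radius
            final long timestampMillis;  // when the fix was taken

            Fix(String source, double accuracyMetres, long timestampMillis) {
                this.source = source;
                this.accuracyMetres = accuracyMetres;
                this.timestampMillis = timestampMillis;
            }
        }

        public Fix best(List<Fix> fixes, long oldestAcceptableMillis) {
            Fix best = null;
            for (Fix f : fixes) {
                if (f.timestampMillis < oldestAcceptableMillis) {
                    continue; // stale reading, ignore it
                }
                if (best == null || f.accuracyMetres < best.accuracyMetres) {
                    best = f;
                }
            }
            return best; // null if no source had a fresh enough fix
        }
    }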

Amid all these new challenges brought by mobility, there is one old dilemma that will never go away - how big your component should be. The trite answer is that the component should be just the right size for the particular application environment.

Microsoft's Szyperski paraphrases Einstein, insisting enigmatically that a component should be as small as it can be - but no smaller - and as large as it can be - but no larger.

The point is that if a component is too coarse-grained, its reusability is low, because it is likely to encode a number of processes, some of which will be redundant in a given environment or application. If components are too fine-grained, many will have to co-operate within a larger task, bringing too many dependencies or correlations between them. These dependencies also compromise reusability; so there is an optimum component size somewhere in between.

Secure performance

Unfortunately, reusability is not the only consideration in component size and design; performance and security are two other important ones.

Performance depends on the latency of the interactions between components, on how long they take to exchange relevant messages, according to Dave Booz, senior technical staff member for Websphere Service Component Architecture at IBM.

Tightly-coupled components, on the other hand, may execute faster, but they are less able to evolve independently, as changes to one have an impact on others - which, in turn, impairs reusability and portability.
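The trade-off can be seen in miniature in the Java sketch below, in which all the class names are invented for illustration. The tightly-coupled client calls a concrete collaborator directly, while the loosely-coupled one sees only an interface and pays for its independence with an extra indirection on every call.

    // Tightly coupled: the client constructs and references a concrete
    // class, so any change to ReportWriter ripples into the client.
    class TightClient {
        private final ReportWriter writer = new ReportWriter();
        void run() { writer.writePdf("quarterly"); }
    }

    // Loosely coupled: the client sees only an interface, and an
    // implementation is supplied from outside, so the two components
    // can evolve independently - at the cost of an indirection.
    interface Reporter {
        void write(String name);
    }

    class LooseClient {
        private final Reporter reporter;
        LooseClient(Reporter reporter) { this.reporter = reporter; }
        void run() { reporter.write("quarterly"); }
    }

    class ReportWriter implements Reporter {
        public void write(String name) { /* render the report */ }
        void writePdf(String name) { write(name); }
    }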

Safety may also enter the equation. Some real-time applications, such as flight-control systems, require software that meets much more exacting standards for reliability - often with built-in code redundancy - to avoid single points of failure within critical execution paths. Such systems are less likely to involve reusable components, according to Scott Niemann, senior manager for systems modelling & communications at IBM's Rational Software Delivery Platform division. IBM Rational software helps organisations automate, integrate, and govern the core business process of software and systems delivery.

Security is also a consideration in component sizing, although it has only recently been recognised as an important factor in the overall design. It is often impossible to fully assess the reliability of a large application or software system, such as a relational database or operating system; but a small component of the system is much easier to assess, with a reduced probability of missed vulnerabilities.

Therefore, if critical parts of a system can be incorporated within just a few components, each of which can be validated rigorously, overall security can be improved - indeed, the same is likely to apply to safety within avionics systems, for example.

Cisco Systems has realised this, and is currently engaged in a project entitled 'Predicting Security Vulnerabilities in Software Components'. Work is still in progress, but the aim is to develop new methods for assessing the reliability of components and characterising their vulnerabilities.

With all these variables to consider, though, there should be sympathy for the IT developer, who has to embrace all this additional complexity. For this reason, IBM believes that component development should be jacked-up to a higher level of abstraction, allowing developers to define the services provided by components in an intuitive graphical manner, without having to worry about dependencies or granularity.

"IBM has technology that takes these graphical UML (Unified Modelling Language - a graphical software design language) component models, and automatically constructs the components for deployment using Java and .NET or traditional programming languages such as C and C++," says IBM's Scott Niemann.

"Depending on the type of application, embedded software or IT, these components can be created with additional timing constraints to fit into a breadth of application types spanning various market needs."

Clearly this is work in progress, and it is unlikely that software generated from UML would yield the best optimised components at present, given all the variables involved; but it is a way forward.
