February 2007
Experts voice

Measuring achievement against targets
Interview with Roger Parrié, Head of NICT (New Information and Communications Technology) projects at Bull
A career in information systems architecture has kept Roger abreast of the most recent developments in state-of-the-art technologies (CORBA, J2EE) and, more recently, of project management. Drawing on this dual experience, he was one of the people at Bull responsible for designing NovaForge.

"Today, software quality is synonymous with beautifully written code. Nevertheless, what is vital to project management is not so much a respect for certain programming standards than for the project requirements as stipulated by the project specification. Software quality measurement sets itself the task of evaluating - throughout the life of a project - the gap between the product under development and the target. The NovaForge development platform includes traceability and measurement tools for just this purpose, making it possible to translate high-level requirements into unitary testing routines. This means it is possible to monitor the real state of play on any project on a daily basis. This in turn enables users to monitor the actual state of progress on the project from day to day, ensuring genuinely effective management."

One of the elements offered as part of the NovaForge development factory is software quality management. How do you define this term?
The main problem in developing a software product is how to meet the specification within a given timescale and budget. The Contracting authority sets the levels of performance and functionality required on the basis of a number of criteria:
- validity: how closely the software fulfils the functions defined by the specification and design brief;
- reliability (robustness): the software's capacity to function under extreme or abnormal conditions;
- extensibility: how easily it lends itself to modification or extension of its functionality;
- re-usability: whether it can be effectively re-used, in whole or in part, in other applications;
- compatibility: how easily it interoperates with other software;
- efficiency: how far it optimizes the use of hardware resources;
- portability: how easily it can be transferred to other hardware and software environments;
- verifiability: how easily testing procedures can be applied to it;
- integrity: the degree to which it protects its own code and data against unauthorized access;
- ease of use: how long it takes users to familiarize themselves with it, to prepare data, and to interpret and correct errors.
All these aspects are taken into account to establish a target that the Prime contractor's team is expected to attain. The real problem lies in knowing, even as the project unfolds, how far away one is from that target; that is precisely the aim of the software quality measurement approach we have defined. It involves establishing a yardstick that can be overlaid on the matrix of requirements, to measure how far the development work already carried out goes towards meeting the set objectives.
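
To make the idea of a yardstick overlaid on the requirements matrix concrete, here is a minimal sketch in Java of how requirement-derived test results might be rolled up into a single completion rate. All the names (Requirement, QualityYardstick, completionRate) are hypothetical illustrations, not part of NovaForge's actual API.

```java
import java.util.ArrayList;
import java.util.List;

/** One requirement from the specification, traced to its tests. (Hypothetical illustration.) */
class Requirement {
    final String id;          // e.g. "REQ-12: order totals include VAT"
    final int testsPlanned;   // tests derived from this requirement
    final int testsPassing;   // tests currently passing

    Requirement(String id, int testsPlanned, int testsPassing) {
        this.id = id;
        this.testsPlanned = testsPlanned;
        this.testsPassing = testsPassing;
    }
}

/** Overlays a pass-rate "yardstick" on the matrix of requirements. */
public class QualityYardstick {
    private final List<Requirement> matrix = new ArrayList<Requirement>();

    public void add(Requirement r) { matrix.add(r); }

    /** Fraction of planned, requirement-derived tests that currently pass (0.0 to 1.0). */
    public double completionRate() {
        int planned = 0, passing = 0;
        for (Requirement r : matrix) {
            planned += r.testsPlanned;
            passing += r.testsPassing;
        }
        return planned == 0 ? 0.0 : (double) passing / planned;
    }

    public static void main(String[] args) {
        QualityYardstick yardstick = new QualityYardstick();
        yardstick.add(new Requirement("REQ-12: order totals include VAT", 8, 6));
        yardstick.add(new Requirement("REQ-13: orders persist across restarts", 5, 2));
        System.out.printf("Distance to target: %.0f%% of requirement tests passing%n",
                          100 * yardstick.completionRate());
    }
}
```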

How does this approach differ from the other ‘software quality’ approaches that we usually hear about?
Existing tools such as Cast and, before it, Logiscope measure the intrinsic quality of the code produced: its complexity, the number of classes, the number of attributes and methods per class, and the scope of the methods used. These elegant and sometimes relevant measures are nonetheless restricted to very low-level aspects that say little about how suitable the software really is for the task it has been designed to do. They evaluate the beauty of the style, but not the quality of the text! For the Prime contractor, however, knowing the average depth of an inheritance tree is much less interesting than knowing, for example, whether the object-relational mapping will operate correctly. Software quality measurement as we have defined it in NovaForge answers this type of question, over and above that of how well coding standards have been respected. Those standards are not, moreover, an absolute: they must themselves be derived from the statement of requirements. It is also worth adding that today more and more code is generated automatically from UML (Unified Modeling Language) diagrams using an MDA (Model Driven Architecture) type of approach that integrates the best programming practices. Under these conditions, there should be no need to measure the intrinsic quality of the code.
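
By way of illustration, here is what a fitness-for-purpose check such as "does the object-relational mapping operate correctly?" might look like as a round-trip test, in the JUnit 3 style current at the time. Order, OrderDao and their methods are hypothetical stand-ins, backed by an in-memory map so the sketch runs; a real test would exercise the actual mapping layer.

```java
import java.util.HashMap;
import java.util.Map;
import junit.framework.TestCase;

/** Hypothetical domain class used only for this sketch. */
class Order {
    private final String number;
    private final int lineCount;
    Order(String number, int lineCount) { this.number = number; this.lineCount = lineCount; }
    String getNumber() { return number; }
    int getLineCount() { return lineCount; }
}

/** Stand-in for a DAO backed by an O/R mapper; in-memory here so the sketch runs. */
class OrderDao {
    private final Map<String, Order> store = new HashMap<String, Order>();
    void save(Order o) { store.put(o.getNumber(), o); }
    Order findByNumber(String number) { return store.get(number); }
}

/** Round-trip test: does persisting and reloading preserve the object's state? */
public class OrderMappingTest extends TestCase {
    public void testOrderSurvivesPersistenceRoundTrip() {
        OrderDao dao = new OrderDao();
        Order original = new Order("CMD-2007-001", 3);
        dao.save(original);
        Order reloaded = dao.findByNumber("CMD-2007-001");
        assertEquals(original.getNumber(), reloaded.getNumber());
        assertEquals(original.getLineCount(), reloaded.getLineCount());
    }
}
```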

Would these measurements of quality be of interest, for example, in the area of third-party application maintenance?
Yes, absolutely. Using this approach, we will be able to scope future software maintenance needs, carry out impact analyses, establish the costs of a development program, and even evaluate an increase in complexity. But whilst the impact of all this is not negligible, it is nowhere near as significant as the improvements we expect to achieve by measuring quality at a much earlier stage in the project. Maintenance-phase measurements come relatively late in a project's life, when development work is well advanced; at that point you are simply in reactive mode. With software quality measurement, it is a question of becoming proactive, and assuring quality right from the outset of the project in order to control its progress. This effectively steps up the idea of quality control from simply ‘monitoring’ to ‘actively managing’.

So how do you set about evaluating how far the functionality requirements for an application under development have been met?
We need to start by distinguishing between two types of requirement: those of the project itself, and those that relate to software engineering. The latter are linked to good design and development practices which experience has shown are enough to guarantee reliability and re-usability. So rather than measuring general quality indicators, we concentrate on evaluating indicators that show how far the standards and the main architectural patterns, which capitalize on the know-how of the software industry, are being respected. For example, we know that the optimum architecture for a J2EE application consists of five layers, from the presentation layer down to the database call layer. We would therefore look at whether this model has been followed closely, and whether the rules governing calls between layers have been respected. One aspect of software quality management is therefore to measure how well these good design practices have been implemented. Code generation by a factory like NovaForge is, moreover, one way of addressing this issue: with predefined programming standards, we can indeed draw on low-level measures to confirm that the more general requirements have been respected.
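
As a minimal sketch of what checking those inter-layer call rules might look like, the fragment below hard-codes a five-layer ordering and a rule that each layer may call only itself or the layer directly beneath it. Both the layer names and the exact rule are assumptions for illustration, not Bull's published standard; a real tool would extract the call edges from the code itself.

```java
import java.util.Arrays;
import java.util.List;

/** Illustrative check of inter-layer call rules in a five-layer architecture. */
public class LayerRuleCheck {

    // Assumed layer names, ordered from presentation (0) down to database calls (4).
    static final List<String> LAYERS =
        Arrays.asList("presentation", "application", "service", "domain", "persistence");

    /** A call is allowed only within a layer or to the layer directly beneath it (assumed rule). */
    static boolean callAllowed(String from, String to) {
        int i = LAYERS.indexOf(from), j = LAYERS.indexOf(to);
        return i >= 0 && j >= 0 && (j == i || j == i + 1);
    }

    public static void main(String[] args) {
        // A real measurement tool would derive these call edges from the source code.
        System.out.println(callAllowed("presentation", "application")); // true: adjacent layers
        System.out.println(callAllowed("presentation", "persistence")); // false: skips three layers
    }
}
```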

What about the project-specific requirements?
These are the requirements defined by the Contracting authority, and they cover functionality, quality of service and the choice of technical platform. To measure the quality of the response, we use a raft of tests carried out at an increasingly fine level of detail. We start by establishing a repository that translates the requirements into service interface contracts on every layer of the software's architecture. Compliance with the functional requirements is then verified progressively by testing these interface contracts. The metrics themselves are straightforward: a summary success rate for the tests at each level of the architecture involved. Each test at a given level verifies a requirement that itself contributes, to a greater or lesser extent, to a requirement at a higher level. In this way you can measure quite general requirements using elementary and, above all, operational metrics.
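
To make the notion of a service interface contract concrete, here is a minimal sketch of one functional requirement translated into a contract on the service layer, together with the test that verifies it. The requirement, the CustomerSearchService interface and the in-memory implementation (included so the sketch runs) are all hypothetical.

```java
import junit.framework.TestCase;

/**
 * Hypothetical requirement "REQ-07: searching for an unknown customer must
 * return an empty result, never null or an exception", captured as an
 * interface contract on the service layer.
 */
interface CustomerSearchService {
    /** Never returns null; unknown names yield an empty array. */
    String[] findByName(String name);
}

/** Trivial in-memory implementation so the sketch runs end to end. */
class InMemoryCustomerSearch implements CustomerSearchService {
    public String[] findByName(String name) {
        return "Durand".equals(name) ? new String[] { "Durand, Paris" } : new String[0];
    }
}

/** Contract test: its pass/fail result feeds the success rate for this layer. */
public class CustomerSearchContractTest extends TestCase {
    public void testUnknownCustomerYieldsEmptyResult() {
        CustomerSearchService service = new InMemoryCustomerSearch();
        String[] hits = service.findByName("Nobody");
        assertNotNull(hits);          // the contract forbids returning null...
        assertEquals(0, hits.length); // ...and demands an empty result instead
    }
}
```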

Are there tools available to carry out both these types of software quality measurements?
Certain Open Source products are available today that enable these tests to be designed at a reasonable cost: the so-called xUnit testing frameworks, such as HTTPUnit, JUnit, and DBUnit. These were designed for unit testing, and the principle governing their use is simple: integrate the test classes into the code base in such a way that they can subsequently be used to trace requirements and so measure the degree of compliance. This does require a certain discipline when it comes to programming. But here again, one can take a measurement to determine whether the tests' potential coverage is adequate.
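
One simple way to impose that discipline, sketched below, is to tag each test with the identifier of the requirement it verifies, so that a measurement tool can read the tags back and plot tests against requirements. The @Verifies annotation, the requirement ID and the billing rule are hypothetical illustrations, not a NovaForge or xUnit feature.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import junit.framework.TestCase;

/** Hypothetical marker linking a test method to the requirement it verifies. */
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Verifies {
    String value(); // requirement identifier, e.g. "REQ-12"
}

public class BillingTest extends TestCase {

    /** Hypothetical billing rule under test: totals carry 19.6% VAT. */
    static double totalWithVat(double preTax) { return preTax * 1.196; }

    @Verifies("REQ-12") // "order totals must include VAT"
    public void testTotalIncludesVat() {
        assertEquals(119.6, totalWithVat(100.0), 0.001);
    }

    /** A measurement tool could scan these tags to plot tests against requirements. */
    public static void main(String[] args) {
        for (Method m : BillingTest.class.getMethods()) {
            Verifies v = m.getAnnotation(Verifies.class);
            if (v != null) {
                System.out.println(m.getName() + " -> " + v.value());
            }
        }
    }
}
```
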
In the same way, when it comes to compliance with architecture, design and programming standards, Open Source tools such as Checkstyle or PMD let you verify cost-effectively how well the code conforms both to the state of the art in general and to the house standards that capture internal best practices. This makes it possible to harmonize an organization's applications, which in turn yields productivity gains: developers assigned to a new application find themselves on familiar territory, and so become immediately productive.

How do these measurement tools work to provide a global vision of the quality of the software?
NovaForge integrates the different measurement tools with the help of Maven or Ant, Open Source products that automate the code integration chain, the execution of the test procedures, and the publication of reports as part of a process of continuous integration. This means quality can be controlled throughout the project, making the development work much easier to manage. While we are still working on full-scale dashboards, the chain of control is already operational in a professional and highly structured way: it provides high-level indicators from which you can zoom in on a precise part of the code.
Today, project tracking is essentially a question of workload: how many person-days have already been expended? How many are left? From now on, you can in principle gauge objectively, and on a daily basis, the quality of the code, the progress made since the previous day, and how far there is still to go before reaching the target, that is to say, meeting the customer's needs. This is an ideal management tool for any project manager, but also an excellent means of communication between the Contracting authority and the Prime contractor. It also facilitates the running of the project, for example when it comes to resource allocation, but equally in the event of a change in requirements or priorities, since work will be that much easier to re-deploy. With an iterative methodology and these kinds of measurement tools, you really do have the ability to adapt to changes in scope or requirements.

What kind of impact could such a tool have on the way a project is organized?
You have a statement of requirements, the capacity to trace those requirements through the application, and tools to assess and measure them. Nevertheless, you still need to translate the functional requirements into technical imperatives. That is the role of the functional architect and the technical architect, who work closely with the project manager: the former establishes a repository of measurable requirements, and the latter breaks these down into technical elements to be measured. This is only possible if one adopts an appropriate project methodology. That is why at Bull, and in NovaForge, we recommend an iterative development cycle: one that adapts the RUP (Rational Unified Process) by integrating the 2TUP (2 Tracks Unified Process) approach. This Y-shaped model separates functional and technical concerns before uniting them in a single optimal solution, which is precisely what our software quality measurement initiative sets out to do.
