Why and How to Improve Requirements
Author: Bernd Holzmüller (ITK Engineering), Wolfgang Meincke | Date: January 05, 2021
“How do you manage your requirements?”
We ask this question quite often in customer meetings. The people we meet are usually developers and testers in the automotive industry. In most cases the answers range between “We do not have any requirements. The model represents them.” and “We use a requirements management system; however, the requirements are not very detailed.” Of course, there are some laudable exceptions but on average the quality of the requirements is not that good.
There are several reasons for the lack of quality, and our list is certainly not complete. The main reason is, from our point of view, that requirements engineering is a (very) underestimated task: the skills and experience a requirements engineer needs, and the time it takes to write good requirements, are both underestimated. This in turn stems from underestimating the consequences that poor requirements have for the duration and cost of a project.
Another observation is that quite often there is no dedicated requirements engineer at the software level. Either the project manager or the developers write the requirements, which leads either to very high-level requirements or to implementation-dependent requirements. If the requirements are too high-level, missing details leave room for interpretation during development and testing. If they are implementation-dependent, the requirement describes how to do it rather than what the expected functional behavior is. In addition, (project) managers love to have fast results to be able to show that the project is on track. Unfortunately, this results in a very fast start of the implementation before the requirements are clear. Model-based development makes the situation even worse; there it is common to document the requirements once the implementation is already done.
The major quality issues resulting from the problems just described are:
- System/Software boundaries are not clearly defined
- Functional requirements do not provide a black-box view on the system
- Used terms are not defined or inconsistently used
- The semantics of requirements are ambiguous
- Preconditions are incomplete or at least not locally apparent
- Effects are incompletely specified
If a project is based on requirements exhibiting any of these issues, many bugs will be found at a late stage of the project. There is no doubt that fixing bugs late in a project has a tremendous effect on the release date and the costs. This is not surprising, because such requirements leave too much room for interpretation.
But let’s stop complaining and let’s have a look at what good requirements should be.
What are good requirements?
If you search the web for “good requirements” you will get several bulleted lists that more or less cover the same properties of a good requirement:
- Clear (concise, atomic, simple, precise)
- Feasible (realistic and possible)
- Verifiable (Testable)
At first glance, it seems to be clear what good requirements should look like. However, this turns into a challenge when these properties are applied in practice. For example, if there are many preconditions it can become difficult to be both precise and understandable. Or, if you slice requirements into atomic size it might be a problem to remain understandable and still achieve consistency. Atomic requirements that are fully self-sufficient typically interact strongly with each other through shared conditions and effects, and can thus be hard(er) to understand collectively and to keep consistent and complete.
These are just a few examples of the challenges entailed by requirements engineering. The difficulty increases further if the requirements engineering tasks are split between different engineers. The complexity of keeping all these aspects in balance is the reason why requirements engineering takes time and needs educated, experienced engineers.
When looking at (model-based) development and testing, it is common to use processes, methods, and tools to ensure that several quality goals are reached. There are coding and modelling guidelines or test metrics that are measured. Why not apply something similar for requirements engineering?
This topic is not new, but it becomes more and more important due to the increasing number of safety-critical features in software as well as the growing amount of software running on controllers in vehicles. It is already quite common to use language patterns for requirements to address the issues named above. However, formalization takes this to a whole new level.
Formalization means that the requirement is expressed in a formal language. A formal language
- provides an unambiguous and pre-defined structure the requirement has to fit in
- uniquely defines both syntax and semantics
- presupposes clear system boundaries
- usually makes requirements machine-readable
Several different formal languages have been developed since the late 1970s, e.g. LTL (Linear Temporal Logic) and CTL (Computational Tree Logic). However, these languages are not really suitable for the daily work of an engineer, since the corresponding mathematical formulas require highly experienced people to be understood correctly. Therefore, most current approaches either use fixed text patterns or a graphical representation that is translated to LTL or CTL in the background. This allows the requirements to be formal on the one hand, while still keeping them at a level that a requirements engineer can handle in their daily work.
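As an illustration (our own sketch, not taken from a specific tool), the simple property "whenever the On/Off button is pressed, the status eventually becomes on" would read in LTL as:

```latex
\mathbf{G}\left(\mathit{pressed} \rightarrow \mathbf{F}\,\mathit{status\_on}\right)
```

Here $\mathbf{G}$ means "globally" (in every state) and $\mathbf{F}$ means "finally" (in some future state). Even for this tiny property, the notation hints at why such formulas are usually hidden behind text patterns or graphics in practice.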
The formalization addresses all major issues named before. For this blog article, we will consider a graphical representation. The basic concept is that there is always an observable triggering event (trigger) and a reaction (action).
Each of these elements has at least one condition but can be enhanced by additional elements like start- and end-events, exit-conditions and timings.
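As a minimal sketch of this structure (the names below are our own invention, not the API of any particular requirements tool), the trigger/action pattern could be modeled like this:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Trigger:
    condition: str                       # observable triggering event
    precondition: Optional[str] = None   # optional additional condition

@dataclass
class Action:
    condition: str                       # expected observable reaction
    max_delay_ms: Optional[int] = None   # optional timing constraint

@dataclass
class FormalRequirement:
    trigger: Trigger
    action: Action

# Example: "If the On/Off button is pressed, the controller turns on."
req = FormalRequirement(
    trigger=Trigger("On/Off button is pressed"),
    action=Action("Status is set to on"),
)
print(req.trigger.condition)  # -> On/Off button is pressed
```

The optional fields leave room for the additional elements mentioned above, such as preconditions and timings, without forcing every requirement to use them.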
To demonstrate how this can improve the quality of a requirement, let’s assume a simple controller. The interface of that controller is an on/off button and two outputs (status and light). One of the requirements for this controller (that shows some of the weaknesses mentioned before) is:
The controller can be turned on.
In this case it is quite obvious that the requirement neither refers to a specific interface (system boundaries), nor states any preconditions, nor has a clear syntax, just to name three weaknesses. If we try to formalize the requirement, the following questions arise:
- How is the controller turned on?
--> By pressing the On/Off button
- How can it be observed that the controller is turned on?
--> The status is set to on
Let’s refine the requirement:
If the On/Off button is pressed, the controller turns on.
As a formal requirement, this reads: Trigger "On/Off button is pressed", Action "Status is set to on".
A new question that might arise is whether there is a timing constraint between Trigger and Action, because as written, Trigger and Action happen at the same time. Another question that comes to mind is whether a precondition of this requirement should be that the Status is off when the button is pressed. If we assume that this is the case, timing becomes mandatory, because the Status cannot be on and off at the same time. Considering these aspects, the requirement should be enhanced to:
If the On/Off button is pressed and the controller is off, the controller turns on within 10ms.
As a formal requirement, the Trigger "On/Off button is pressed" is now guarded by the precondition "Status is off", and the Action "Status is set to on" carries the timing constraint "within 10 ms".
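To make the semantics of the refined requirement concrete, here is a small illustrative sketch (our own, not the output of a particular tool) that checks a recorded signal trace against it: whenever the button is pressed while the Status is off, the Status must become on within 10 ms.

```python
def check_requirement(trace, max_delay_ms=10):
    """Check the refined requirement against a recorded trace.

    `trace` is a list of (time_ms, button_pressed, status_on) samples.
    Returns the timestamps of all violating trigger events.
    """
    violations = []
    for i, (t, pressed, on) in enumerate(trace):
        if pressed and not on:  # Trigger holds and precondition "Status is off"
            deadline = t + max_delay_ms
            # Action: status must be on in some sample within the deadline
            satisfied = any(on2 for (t2, _, on2) in trace[i:] if t2 <= deadline)
            if not satisfied:
                violations.append(t)
    return violations

# A trace that satisfies the requirement (status on after 7 ms):
ok_trace = [(0, False, False), (5, True, False), (12, False, True)]
# A trace that violates it (status only turns on after 25 ms):
bad_trace = [(0, False, False), (5, True, False), (30, False, True)]

print(check_requirement(ok_trace))   # -> []
print(check_requirement(bad_trace))  # -> [5]
```

Note how the formalized elements (trigger, precondition, action, timing) translate directly into checkable conditions, which is exactly what makes such requirements machine-readable.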
Even though this was just a small example, and it seemed clear what the original requirement wanted to express, there was much room left for interpretation and the timing was missing. Just because something seems clear to the person who writes the requirement does not mean that it is clear to another person in the project.
This short article pointed out some well-known issues with requirements quality in many projects, and showed that appropriate methods and tools are available to address them. We have seen that quite often there is room for improvement, and that it takes a decision from (project) management to spend more time on requirements engineering before starting the implementation, as well as to use the right people and methods. It is a well-researched and documented fact that the later a bug is found in a project, the more expensive it is to fix.
Jack Ganssle hit the nail on the head when he said, “If the second half of the project is ‘debugging’ that must mean the first half is ‘bugging’.”
In contrast to bugs that happen during development, weak requirements lead to systematic interpretation issues, inconsistencies, and time-consuming follow-up discussions on what is really meant by a certain requirement. This means that the requirements quality has a much stronger impact on the product quality than implementation bugs usually have.
One more thing
Transforming requirements into a formal representation not only increases the requirements quality, but also enables additional verification possibilities, e.g.
- A Formal Test verifies execution records coming from different test stages or field tests against the formalized requirements. This shows whether the execution records really cover all requirements, and it may reveal that an execution record violates a requirement.
- Formal Verification can be used to run a mathematical proof, using model-checking technology, on a formal requirement to verify whether the requirement can ever be violated at the software level.
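The formal-test idea can be sketched in a few lines (a toy illustration under our own assumptions, not a real tool): each formalized requirement becomes a checker, and every execution record is classified as covering, violating, or not triggering it.

```python
def make_checker(trigger, expected):
    """Build a checker for one trigger/action requirement.

    A record is a list of {signal: value} snapshots. If `trigger` ever
    holds, `expected` must hold in some snapshot from that point on.
    """
    def check(record):
        for i, snap in enumerate(record):
            if trigger(snap):
                if any(expected(s) for s in record[i:]):
                    return "covered"
                return "violated"
        return "not_triggered"
    return check

# Hypothetical requirement for the button/status controller example:
requirements = {
    "REQ-1: button press turns controller on": make_checker(
        trigger=lambda s: s["button"],
        expected=lambda s: s["status"] == "on",
    ),
}

# One execution record, e.g. from a test stage or a field test:
records = [
    [{"button": False, "status": "off"},
     {"button": True,  "status": "off"},
     {"button": False, "status": "on"}],
]

def coverage_report(requirements, records):
    """Summarize each requirement over all execution records."""
    report = {}
    for name, check in requirements.items():
        results = [check(rec) for rec in records]
        if "violated" in results:
            report[name] = "violated"
        elif "covered" in results:
            report[name] = "covered"
        else:
            report[name] = "not covered"
    return report

print(coverage_report(requirements, records))
# -> {'REQ-1: button press turns controller on': 'covered'}
```

Requirements that end up "not covered" point at gaps in the test suite, while any "violated" entry points at a concrete record contradicting a formalized requirement.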
Bernd Holzmüller studied computer science at Stuttgart University and subsequently worked as a research assistant at the department of programming languages and compiler construction. His main interests are practical use of formal methods and notations, process modeling and improvement, and quality assurance. He has over 20 years of experience in software engineering and project management in various industries like transportation, aerospace & defense, automation and automotive. After joining ITK Engineering in 2012, he was involved in defining and establishing a quality management system for software development for an Automotive OEM. Subsequently, he worked as a Manager R&D as a consultant for requirements engineering, development and testing processes. Currently, he is responsible for V&V business development at ITK Engineering.
Wolfgang Meincke studied Computer Science at the University Ravensburg-Weingarten where he graduated in 2006. He then worked at EWE TEL GmbH where he was responsible for requirements engineering and project management for software development projects as well as agile software development processes. In 2014 he joined BTC Embedded Systems as senior pilot engineer to support the customers to integrate the tools into their development processes as well as giving support and training on test methodologies regarding the ISO 26262 standard. One of his main areas of interest is the formalization of safety requirements and their verification based on formal testing and model-checking technologies for unit test, integration test and HIL real-time-testing.