We were recently asked the following question by one of our clients:
“Our company is updating/replacing a legacy system. The current system includes an embedded processor that was built long ago and developed without any requirements (at least none formally defined and managed). Given that this legacy processor has a nice set of features, management wants to use it in the new system. Moreover, our System Engineers will have to link (trace) top-level requirements to the processor requirements in order to guarantee traceability and coverage.
My issue is this – I strive to have a set of requirements that are ‘implementation free’. However, I don’t know how to achieve this goal given that I am unsure how to write functional and performance requirements for the legacy processor without exposing the actual implementation details of the legacy processor.
For example, the processor has 64KB of program memory; would that be a ‘requirement’? I have the impression that it is more an outcome of a design choice made to meet other requirements (like application size, technology constraints, etc.).
Could someone give me some feedback on this? Pointers, suggestions?”
Implementation is a common issue. We have addressed this topic in several blogs: “Avoiding Implementation” and “How to Handle Implementation in Customer Requirements”.
Now to your issue/question, which is interesting at many levels.
The short answer? The constraint to use the legacy processor is implementation. We advocate that implementation be avoided in the “design-to” set of requirements, leaving it to the design team to use their expertise to define a design that will best meet the stakeholder needs and expectations as communicated via the “design-to” requirement set.
That said, as a solution to your issue, you could reverse engineer the “design-to” requirements to be consistent with the legacy processor. These reverse-engineered requirements would then be the parents to which the design team can trace their processor set of “build-to” requirements. However, for the project to be successful, you need to seriously evaluate whether the legacy processor is the best choice for this project. As we are all aware, computer technology has advanced exponentially over the years, and a legacy processor may not be the best choice.
I tell my students: “Writing requirements is not an exercise in writing, but an exercise in engineering. Every written requirement communicates an engineering decision or choice that is being made concerning the desired functionality, performance, quality, adherence to standards and regulations, etc.” You could write the “design-to” requirements to be consistent with the legacy processor’s features, functionality, and performance. However, beware that while you may meet the management constraint to use the legacy processor, you may fail to meet the needs and expectations of the other stakeholders in the current and future operating environment.
The long answer? We like to look at requirements from several perspectives. From a technical requirements perspective, we like to make a distinction between “design-to” and “build-to” requirements.
“Design-to” requirements focus on the “what”, not the “how”. They are the result of transforming stakeholder needs and expectations into a language that clearly communicates those needs and expectations to the design team. This set of requirements, in general, should not contain implementation – where implementation is defined here as a choice made by the design team to best meet the design-to requirement that drove that choice. From a traceability perspective, each design-to requirement should trace to one or more stakeholder needs and expectations defined during scope definition. In the design-to set of requirements, rather than a requirement on memory size, include the requirements that communicate the capabilities needed (functional, performance, quality, security, interfaces, communication protocols, etc.) that drive memory size and processor performance.
“Build-to” requirements reflect the “how” design choices and communicate them to those who are actually building or coding the system. These requirements reflect “how” the design-to “what” requirements will be met. It is this set of requirements that would normally specify the processor to be used and, based on that processor, the amount of program memory, communications, speed, power, thermal characteristics, etc. From a traceability perspective, each design choice should be traceable to one or more requirements in the “design-to” set of requirements.
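The two-level trace structure described above can be sketched as a simple data model. This is a minimal, hypothetical illustration – the requirement IDs, texts, and the 50 ms figure are all invented, not drawn from any real project:

```python
# Sketch: design-to requirements trace up to stakeholder needs;
# build-to requirements trace up to design-to requirements.
# All IDs and requirement texts are illustrative only.

from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: str
    text: str
    traces_to: list = field(default_factory=list)  # IDs of parent needs/requirements

# A stakeholder need captured during scope definition
need = Requirement("N-1", "Operators need filtered sensor data within 50 ms.")

# A "design-to" requirement: states the 'what'; no processor is named
design_to = Requirement(
    "DT-7",
    "The system shall compute a filtered output within 50 ms of sensor input.",
    traces_to=["N-1"],
)

# A "build-to" requirement: records the 'how' design choice,
# including the processor and memory size that choice implies
build_to = Requirement(
    "BT-12",
    "The signal-processing function shall be implemented on processor X "
    "with 64KB of program memory.",
    traces_to=["DT-7"],
)

# Traceability check: every requirement has at least one parent
for req in (design_to, build_to):
    assert req.traces_to, f"{req.req_id} is untraced"
```

Note how memory size appears only at the build-to level, as an outcome of the processor choice, while the design-to level carries only the capability that drove it.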
Now let’s look at the overall process of developing a product.
The client said their company is updating/replacing a legacy system. This implies that someone (internal or external) has a problem or opportunity that the current system can’t address and the client’s company is supplying a solution – the new system.
The approach we advocate in our classes is to define the scope of the project first. You can read about this in our blog: “Baseline Your Scope Before Writing Requirements”. An outcome of these activities will be the stakeholders’ needs and expectations for system capability, including functionality, performance, security, etc. These needs and expectations are what are transformed into the “design-to” set of requirements.
These requirements are then allocated to parts of the system architecture. Those that will be implemented via a processor will be allocated to that processor. The design team responsible for implementing these requirements will select their choice for a processor that best meets the design-to set of requirements and include their choice in the “build-to” set of requirements. The requirement for using the selected processor can then be traced to the “design-to” requirements that were allocated to the processor.
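The allocation-and-trace step above lends itself to a simple coverage check: once design-to requirements are allocated to architecture elements, every one of them should end up with at least one build-to child. A minimal sketch, with invented requirement IDs and component names:

```python
# Hypothetical allocation of design-to requirements to architecture
# elements, and the build-to requirements that trace back to them.
# All IDs and names are invented for illustration.

allocation = {
    "processor": ["DT-7", "DT-8"],
    "power_supply": ["DT-9"],
}

build_to_traces = {
    "BT-12": ["DT-7"],   # e.g. "use processor X"
    "BT-13": ["DT-8"],
    "BT-20": ["DT-9"],
}

def uncovered(allocation, build_to_traces):
    """Return design-to requirements with no build-to child tracing to them."""
    covered = {p for parents in build_to_traces.values() for p in parents}
    return {
        component: [r for r in reqs if r not in covered]
        for component, reqs in allocation.items()
        if any(r not in covered for r in reqs)
    }

print(uncovered(allocation, build_to_traces))  # {} means full coverage
```

Running a check like this at each baseline surfaces allocated requirements the design has silently dropped, before verification planning begins.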
Now the fun begins. There are two scenarios:
In the first scenario, design-to requirements are written to be consistent with the legacy processor’s features, functionality, and performance. In doing this, the design team should have no problem meeting the constraint to use the legacy processor and can trace the build-to requirement to use that processor back to the “design-to” requirements. In reality, they have developed the build-to processor requirement first, and the parent design-to requirements have then been reverse engineered to be consistent with the legacy processor. However, as I said before, the resulting requirements may not be consistent with the needs and expectations of the stakeholders for the new system in the current and future operating environment.
What happens if the implementation of the legacy processor requirement results in a system that does not meet the stakeholder needs and expectations defined during the scope definition phase?
For the first scenario, because both sets of requirements were written assuming the legacy processor, you would pass system verification: the resulting system would meet both the design-to and build-to sets of requirements. However, if this system does not meet the stakeholder needs and expectations and does not serve its intended purpose in its operational environment, you will fail system validation.
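The verification/validation gap in this first scenario can be made concrete with a toy numeric example. The latency figures here are entirely invented; they simply show how requirements reverse engineered around a legacy part can be met while the underlying stakeholder need is not:

```python
# Toy illustration (invented numbers) of passing verification while
# failing validation, as in the first scenario.

# Requirement reverse engineered to match what the legacy part can do:
requirement_max_latency_ms = 200
legacy_processor_latency_ms = 180

# Stakeholder need defined during scope definition:
stakeholder_need_latency_ms = 50

# Verification: does the system meet its written requirements?
verification_pass = legacy_processor_latency_ms <= requirement_max_latency_ms

# Validation: does the system meet the actual stakeholder need?
validation_pass = legacy_processor_latency_ms <= stakeholder_need_latency_ms

print(verification_pass)  # True: the written requirement is satisfied
print(validation_pass)    # False: the stakeholder need is missed
```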
For the second scenario, the design-to set of requirements is written based on the knowledge gained from the scope definition phase. These requirements do not assume a specific processor; rather, they clearly reflect the needs and expectations of the stakeholders from an overall system capability, functionality, performance, quality, security, etc. perspective.
For this second scenario, it is likely the legacy processor is not able to support this set of requirements. There are more modern processors that reflect the latest technology in terms of functionality, performance, quality, and security that would meet the design-to set of requirements.
Now management has to make a decision. Use the legacy processor anyway? This would mean that the new system would fail system verification, in that the resulting system does not meet the “design-to” set of requirements. The new system would also fail system validation in that the resulting system would not meet the stakeholder needs and expectations – the system would not meet its intended purpose in its operational environment. [For more on verification and validation, see my two-part blog “What is the difference between verification and validation?”]
To pass both system verification and system validation, management would need to remove the constraint to use the legacy processor and let the design team select a processor that would best meet the design-to set of requirements that reflect the stakeholder needs and expectations.
A parting thought
Management may feel that they will save time and money if they are able to use a legacy system. However, in many cases, this is a myth. The reality is that the legacy system was designed for a specific purpose and operational environment. It is very risky to assume that the legacy system will perform as intended and needed when used for a different purpose in a different operational environment.
Thus, the second scenario is the best approach to take. If, by chance, the legacy system will meet the design-to requirements – great! However, if it fails to do so, management needs to accept that fact and allow the design team to select components that will meet the design-to requirements.
Comments to this blog are welcome.
If you have any other topics you would like addressed in our blog, feel free to let us know via our “Ask the Experts” page and we will do our best to provide a timely response.

Tags: build-to requirements, design-to requirements, implementation, legacy, legacy parts, legacy systems, system validation, system verification
By Pierre-Marc Guilbault July 6, 2019 - 5:06 am
Excellent article Lou. This article also closely relates to allocating parent requirements to parts and how traceability from the child requirements to the parent requirements is accomplished.
Instead of generating a system engineering analysis to justify the existence of new derived processor requirements (for a new, optimized processor) that trace to the allocated system requirements, a system engineering analysis is required to justify why and how the existing requirements of the legacy processor can be used to implement the allocated system requirements, including traceability links from the legacy requirements to the system requirements. In the second case, the output of the system engineering analysis should identify the potential performance limitations and margins associated with using this legacy processor as part of the new system being developed.
If the potential performance limitations and margins are unacceptable, either:
1) the system requirements are modified to accommodate the limitations and the stakeholder expectations for the system performance are revised (1st scenario above); or
2) the legacy processor is replaced with a new processor with optimized performance and the management expectation to use the legacy processor is revised (2nd scenario above).
In the grand scheme of things, what is the point of using legacy components to “save time and money” if the resulting system does not exhibit the minimum performance that satisfies the stakeholders’ needs? In this case, developing the new system is pointless since it cannot be sold at all. This would defeat the initial management premise of “saving time and money”.
Thank you Lou for the great articles.
By Lou Wheatcraft July 13, 2019 - 12:19 pm
Pierre-Marc – thanks for the kind words and insight. I have found that there really is no such thing as legacy. I say this in that the performance of a new or updated system is based on the needs for that system and the technologies available to meet those needs. On the surface it would seem there may be an existing component used previously that may be able to meet those needs; however, that capability depends on two factors: intended use AND operational environment. If either or both differ from how the existing component was used, then a new component may need to be used. This new component may have different support requirements and interfaces which need to be addressed (operational environment), as well as performance that will need to be addressed in the context of the system as a whole. For more about this view, see my blogs concerning technology readiness levels and concept maturity levels.