The “small” change fallacy:
In software, even a single-character source code change can cause catastrophic failure. Short of a very rigorous change analysis process, there is no such thing as a "small" change to software.
Headline: July 22, 1962: Mariner 1 Done In By A Typo
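As a minimal illustration (ours, not from the paper; PRESSURE_LIMIT and the check are invented), consider how deleting one character in C silently changes a boundary check:

```c
#include <stdio.h>
#include <stdbool.h>

#define PRESSURE_LIMIT 100   /* hypothetical safety threshold */

/* Intended check: readings at or above the limit are unsafe.
 * Deleting a single character (the '=') turns ">=" into ">", so a
 * reading exactly at the limit is silently treated as safe. */
static bool pressure_unsafe(int reading)
{
    return reading > PRESSURE_LIMIT;   /* intended: reading >= PRESSURE_LIMIT */
}

int main(void)
{
    /* Boundary case: the one-character change masks the fault. */
    printf("reading=100 unsafe? %d\n", pressure_unsafe(100));   /* prints 0 */
    return 0;
}
```

A diff-size metric rates this change as trivially small; its effect on hazard coverage is anything but.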
Another threat to validity for a proven-in-use strategy is permitting "small" changes without re-validation (e.g., as implicitly permitted by NHTSA (2016), which requires an updated safety assessment only for significant changes). Change analysis is difficult for software in general, and can be ineffective for software with high coupling between modules and the resulting complex interdependencies.
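One way to make the coupling problem concrete is to treat change impact analysis as reachability over a module dependency graph: every module with a dependency path to the changed module is potentially affected. The sketch below is ours, with an invented, densely coupled six-module graph; it illustrates the phenomenon rather than any specific tool:

```c
#include <stdio.h>
#include <stdbool.h>

#define N 6  /* hypothetical module count */

/* depends_on[i][j] means module i calls into (or reads data from)
 * module j.  Invented, densely coupled example graph. */
static const bool depends_on[N][N] = {
    {0,1,1,0,0,0},
    {0,0,1,1,0,0},
    {0,0,0,1,1,0},
    {0,0,0,0,1,1},
    {0,0,0,0,0,1},
    {0,0,0,0,0,0},
};

/* Mark every module that could be affected by a change to 'changed':
 * i.e., every module with a dependency path leading to it. */
static void impact_set(int changed, bool affected[N])
{
    affected[changed] = true;
    bool grew = true;
    while (grew) {                       /* fixed-point transitive closure */
        grew = false;
        for (int i = 0; i < N; i++) {
            if (affected[i]) continue;
            for (int j = 0; j < N; j++) {
                if (depends_on[i][j] && affected[j]) {
                    affected[i] = true;
                    grew = true;
                    break;
                }
            }
        }
    }
}

int main(void)
{
    bool affected[N] = {false};
    impact_set(5, affected);             /* "small" change to module 5 */
    printf("potentially affected modules:");
    for (int i = 0; i < N; i++)
        if (affected[i]) printf(" %d", i);
    printf("\n");                        /* prints all six modules */
    return 0;
}
```

In a loosely coupled design the fixed point stays small; in a tightly coupled one it covers essentially the whole code base, which is why "only a few lines changed" says little about the required re-validation scope.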
Because even one line of bad code can result in a catastrophic system failure, it is highly questionable to argue that a change is “small” because it only affects a small fraction of the source code base. Rigorous analysis should be performed on the change and its possible effects, which generally requires analysis of design artifacts beyond just source code.
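The need for design artifacts can also be made concrete. In the hypothetical sketch below (TICK_MS and the watchdog budget are invented), a one-line diff to a shared header violates a timing budget recorded only in a design document, so reviewing the source diff alone cannot reveal the problem:

```c
#include <stdio.h>

/* shared_config.h (inlined) -- the only thing the "small" diff touches. */
#define TICK_MS 10                     /* was 5; retuned to slow one control loop */

/* watchdog.c (inlined) -- untouched by the diff.  Its design budget
 * (fire within 250 ms of a stall) lives in the timing-analysis
 * document, not in any source file. */
enum { WATCHDOG_TIMEOUT_TICKS = 50 };  /* sized when TICK_MS was 5 */

int main(void)
{
    int timeout_ms = WATCHDOG_TIMEOUT_TICKS * TICK_MS;
    /* Prints 500 ms: the 250 ms design budget is now violated, yet a
     * source-only review of watchdog.c shows nothing changed. */
    printf("watchdog timeout = %d ms (design budget: 250 ms)\n", timeout_ms);
    return 0;
}
```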
A variant of this pitfall is arguing that a particular type of technology is generally "well understood," and concluding on that basis that no special attention is required to ensure safety. There is no reason to expect that a completely new implementation, potentially built by a different development team using a different engineering process, has sufficient safety integrity simply because the function it implements is not novel to the application domain.
(This is an excerpt from our SSS 2019 paper: Koopman, P., Kane, A. & Black, J., "Credible Autonomy Safety Argumentation," Safety-Critical Systems Symposium, Bristol UK, Feb. 2019. Read the full text here)
- NHTSA (2016) Federal Automated Vehicles Policy, National Highway Traffic Safety Administration, U.S. Department of Transportation, September 2016.