Showing posts with label: assurance argument
A More Precise Definition for ANSI/UL 4600 Safety Performance Indicators (SPIs)
Safety Performance Indicators (SPIs) are defined in Chapter 16 of ANSI/UL 4600, in the context of autonomous vehicles, as performance metrics ...
Positive Trust Balance for Self Driving Car Deployment
By Philip Koopman and Michael Wagner, Edge Case Research Self-driving cars promise improved road safety. But every publicized incident chips...
Other Autonomous Vehicle Safety Argument Observations
Other AV Safety Issues: We've seen some teams get it right. And some get it wrong. Don't make these mistakes if you're trying to...
Human Test Scenario Bias in Autonomous Vehicle Validation
Human Test Scenario Bias: Machine Learning perceives the world differently than you do. That means your intuition is not necessarily a good ...
Safety Argument Consideration for Public Road Testing of Autonomous Vehicles
Beth Osyk and I are presenting our paper at SAE WCX today on how to argue sufficient road test safety for self-driving car technology. See b...
Nondeterministic Behavior and Legibility in Autonomous Vehicle Validation
Nondeterministic Behavior and Legibility: How do you know your autonomous vehicle passed the test for the right reason? What if it just got ...
Missing Rare Events in Autonomous Vehicle Simulation
Missing Rare Events in Simulation: A highly accurate simulation and system model doesn't solve the problem of what scenarios to simulate...
Dealing with Edge Cases in Autonomous Vehicle Validation
Dealing with Edge Cases: Some failures are neither random nor independent. Moreover, safety is typically more about dealing with unusual cas...
The Fly-Fix-Fly Antipattern for Autonomous Vehicle Validation
Fly-Fix-Fly Antipattern: Brute force mileage accumulation to fix all the problems you see on the road won't get you all the way to being...
The Discounted Failure Pitfall for autonomous system safety
The Discounted Failure Pitfall: Arguing that something is safe because it has never failed before doesn't work if you keep ignoring its ...
The “Small” Change Fallacy for autonomous system safety
The “small” change fallacy: In software, even a single-character change to source code can cause catastrophic failure. With the possible exception of...
Assurance Pitfalls When Using COTS Components
Assurance Pitfalls When Using COTS Components: Using a name-brand, familiar component doesn't automatically ensure safety. It is common ...
Proven In Use: The Violated Assumptions Pitfall for Autonomous Vehicle Validation
The Violated Assumptions Pitfall: Arguing something is Proven In Use must address whether the new use has sufficiently similar assumptions a...
Pitfall: Arguing via Compliance with an Inappropriate Safety Standard
For a standard to help ensure you are actually safe, not only do you have to actually follow it, but it must be a suitable standard in domai...
The Implicit Controllability Pitfall for autonomous system safety
The Implicit Controllability Pitfall: A safety case must account for not only failures within the autonomy system, but also failures within ...
Edge Cases and Autonomous Vehicle Safety -- SSS 2019 Keynote
Here is my keynote talk for SSS 2019 in Bristol, UK: Edge Cases and Autonomous Vehicle Safety. Making self-driving cars safe will require a co...
Command Override Anti-Pattern for autonomous system safety
Command Override Anti-Pattern: Don't let a non-critical "Doer" override the safety-critical "Checker." If you do t...
Credible Autonomy Safety Argumentation Paper
Here is our SSS 2019 paper on pitfalls in safety argumentation for autonomous systems. My keynote talk will mostly be about perception...
A Safe Way to Apply FMVSS Principles to Self-Driving Cars
As the self-driving car industry works to create safer vehicles, it is facing a significant regulatory challenge. Complying with existing F...