No car is self-driving.
A “self-driving” car is piloted by software, which is ultimately written by a person. You don't know who that person was; only that they were employed by a particular company.
They were probably sitting in an office somewhere in California when they wrote the code driving your car. Maybe it was 17:30 on a Friday and, despite caring sincerely about the work they were doing, they happened to be distracted by the prospect of going home. Maybe not. You don't know.
Do you trust that person with your life?
Well, the company hired them, so they can't be completely useless. You trust the company's recruitment procedures. …What are the company's recruitment procedures?
Anyway, presumably there are processes in place to review the code, and stop mistakes from making it into the final software. Presumably. You trust that there are, and that they work, and never fail.
Now imagine the company has made it illegal for you to see how the software works. Are you sure you trust this company with your life?
There should be a law saying that if a vehicle can be piloted by software, and it's capable of containing or hurting a human, then all installed software must be open source, and you must be able to prove that the source code corresponds to the software running in the car.
It has to be legally possible for the vehicle's owner (or prospective owner) to discover how their car might behave in a life-or-death situation, so they can decide whether they want to be responsible for the car's actions.
Logically, the manufacturer who wrote the software would be responsible, but they have no incentive to take responsibility for their cars' imperfections. Doesn't make money. Why admit your own flaws while your competitors keep schtum, look better, and rake it in? Any goodwill from better transparency will evaporate as soon as someone dies in an accident.
It's much safer to claim that the human pilot should have taken control at the critical moment. Capitalist governments won't argue with rich, profitably taxable businesses.
Car makers will only be transparent about how their cars behave if they're obliged to by law.
Merely having access to the software's source code isn't enough. It must be legal to reuse the source code, for several reasons.
Morally, if Non-Specific Engines Ltd writes an algorithm that's better at saving lives than any other algorithm, shouldn't Acme Motors be obliged to use the safer algorithm in their cars, rather than forbidden to?
Practically, you need software experts to audit the code. You want the code checked by an independent expert in the field of vehicle automation — not a business partner of the manufacturer — and that person will be a software developer.
If the auditor later uses a similar concept in their own work, Mom's Friendly Car Company could threaten to sue them, claiming they copied the code illegally. Software developers are rarely as rich as car companies; even the threat of a lawsuit would mean that in practice the code would go unchecked.
And again, morally, you can save lives here, by letting the developer reuse the good code.
Lastly, it needs to be possible to prove that the audited code is actually the code running in the car. You want an independent auditor to build the software for themself, in a development environment they trust, and get the exact same output as what's in the car. It must be possible to build the software reproducibly.
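The verification step described above ultimately reduces to a byte-for-byte comparison: hash the auditor's own build output, hash the image running in the car, and check that they match. A minimal sketch in Python (the firmware byte strings here are placeholders; in practice they would be the auditor's build artefact and an image dumped from the car's control unit):

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Return the SHA-256 hex digest of a firmware image."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical stand-ins for the two artefacts being compared.
audited_build = b"\x7fELF...firmware built by the independent auditor..."
installed_image = b"\x7fELF...firmware built by the independent auditor..."

match = sha256_of(audited_build) == sha256_of(installed_image)
print("verified" if match else "MISMATCH: audited code is not what's running")
```

The comparison is only meaningful if the build is reproducible in the first place: the same source, toolchain, and build flags must always yield bit-identical output, with no embedded timestamps or build paths to perturb the hash.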
Otherwise checking the code is pointless — you still have to trust the car manufacturer, and you can't be sure the software's behaviour doesn't deviate in subtle ways in very specific situations. Maybe you don't care about any subtle differences, but maybe you do. The driver should at least be honest with you, and you can decide for yourself.
None of this will make sure a self-driving car is perfectly safe. All software has bugs. But at least you'll know the driver was acting in good faith.
Trade secrets and competitive advantage are not worth dying for.
…Or you could just trust the big friendly company… right?