Quote from: _MT_ on September 09, 2022, 15:34:39
The problem is that it is just too complicated. It might work fine and then you strike a strange pose, get caught from an unusual angle or wear clothes with a weird pattern and suddenly the computer won't recognize you. The possibilities are so numerous that it is impossible to test all of them. You can't test these systems with simple dummies because variety is the big worry. You can't just run the same protocol over and over again, you have to try to fool it. To see how resilient it is, how realistic it is for it to make a mistake. The more sophisticated the system, the more sophisticated the testing. Otherwise, you risk creating a system that works great in a test environment, but less well in the real world because it wasn't a good-enough representation to expose flaws. Even humans can be fooled and our image processing is very sophisticated. They're definitely doing interesting things but if I can have an active ranging technology, why wouldn't I? (For me, the only answer is pollution.) It's our limitation that we as humans don't have it, not our advantage. We have evolved in an environment where good vision worked really well (we don't even have night vision). When it comes to things like radar, it's all about reflectiveness and size. It's much easier to ensure that any relevant obstacle will produce a strong-enough signal. It doesn't have to know what it is for it to know that it is there.
Completely agree on your initial point - the strongest feature of the system is its capability to learn (something which is incredibly profound in the history of human invention).
Therefore, the best way to leverage that feature is to test it in as many ways as possible, in order to improve the system as much as possible.
With respect to the use of radar, I'm not so sure any more about its necessity for, or even benefit to, self-driving.
We have a tendency to think that more information implies better decisions - while this tends to hold in theory, in reality it's frequently the opposite.
The classic example is when a person/system is presented with multiple facts/values which contradict one another. This creates a serious problem: it's impossible to know beforehand which pieces of information to rely on, and how to evaluate the same sources of information in the future.
Mediating between two different sources of information is a complicated problem. Sometimes the information will be complementary, thereby improving the decision making. Other times it will be contradictory - so you have to have a way of picking one, or of throwing the information away. But the worst scenario is the "2+2=5" one, where an inference is made from partial information from one source, "filled in" by partial information from the other - resulting in a bad decision that might not otherwise have been considered.
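To make that "2+2=5" failure mode concrete, here's a toy sketch of a mediation rule between a camera range estimate and a radar return. To be clear, this is just an illustration of the three cases - all names, thresholds and confidence values are made up, and it's nothing like anyone's actual fusion stack:

[code]
from dataclasses import dataclass
from typing import Optional

@dataclass
class Measurement:
    distance_m: Optional[float]  # None = sensor saw nothing
    confidence: float            # 0.0 .. 1.0

def fuse(vision: Measurement, radar: Measurement,
         agreement_margin_m: float = 2.0) -> Optional[float]:
    """Toy mediation rule between two range estimates."""
    if vision.distance_m is None and radar.distance_m is None:
        return None  # nothing detected by either sensor

    # Partial-information case: only one sensor reports, so we take it
    # at face value. This is exactly where a "2+2=5" inference can sneak
    # in - a gap in one source gets filled by the other, unchecked.
    if vision.distance_m is None:
        return radar.distance_m
    if radar.distance_m is None:
        return vision.distance_m

    # Complementary case: the sensors agree, so a confidence-weighted
    # average genuinely improves the estimate.
    if abs(vision.distance_m - radar.distance_m) <= agreement_margin_m:
        total = vision.confidence + radar.confidence
        return (vision.distance_m * vision.confidence
                + radar.distance_m * radar.confidence) / total

    # Contradictory case: pick one - but on what grounds? The confidences
    # were calibrated per sensor, not against each other, so the choice
    # is essentially arbitrary. This is the mediation problem.
    winner = vision if vision.confidence >= radar.confidence else radar
    return winner.distance_m

# Contradiction: vision says 40 m, radar says 12 m (say, an overhead
# sign reflecting strongly). Which one do you brake for?
print(fuse(Measurement(40.0, 0.7), Measurement(12.0, 0.8)))  # -> 12.0
[/code]

The point of the toy is that the complementary and missing-data branches are easy, but the contradictory branch has no principled answer. A learned system faces the same choice, only implicitly - buried in its weights rather than in a branch you can inspect.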
I suspect this is the reason Musk decided to get rid of the radar - it was making the learning far more complicated due to those contradictions, and it's essentially an intractable problem for a learning system that isn't able to reason.
I suppose the question at the end of the day is whether you think the system will learn to be safer with a single set of sensors that it always relies upon by design, or with two different sensor types that it always has to mediate between.
I have no idea myself!