Let me be more specific about what I meant by "linear thinking":
When most humans are presented with five or six data points that basically describe a straight line on a standard Cartesian graph, and are asked to predict where the next data point will be, they will generally place their estimate somewhere along that line. If there are 6,000 data points, they take that as even stronger evidence that the next data point will be approximately on that line. A similar effect occurs at the beginning of an exponential curve: because the changes in the past have been relatively small, people assume the next period will also bring relatively small changes. As a result, when dealing with something that changes exponentially, many humans seem to overestimate the rate of change initially and vastly underestimate it over the long term.
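As a rough illustration of that failure mode, here is a small Python sketch; the numbers are hypothetical and chosen only to show the shape of the error, not to model any particular technology:

```python
# Fit a naive straight-line trend to the early points of an exponential
# process (doubling every period) and compare the extrapolation with
# what the exponential actually does.

early = [2 ** t for t in range(6)]            # 1, 2, 4, 8, 16, 32

# "Linear thinker" estimate: continue at the average step size seen so far.
avg_step = (early[-1] - early[0]) / (len(early) - 1)   # about 6.2 per period

periods_ahead = 10
linear_guess = early[-1] + avg_step * periods_ahead     # about 94
actual_value = 2 ** (5 + periods_ahead)                 # 32768

print(f"linear extrapolation {periods_ahead} periods out: {linear_guess:.0f}")
print(f"actual exponential value:                 {actual_value}")
# The straight-line guess is low by more than two orders of magnitude.
```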
This effect shows up in all sorts of places. In our domain, ask a sailor how much energy is in the wind at different wind speeds. You'll find that most will give you a linear estimate, not the square relationship which is actually the case. Ask a person how much energy has to be dissipated by the brakes in an automobile to reduce its speed from 100 MPH to 90 MPH versus from 20 MPH to 10 MPH. Again, most will guess the two are roughly comparable, because they either don't know the E = 1/2 m v^2 formula or don't know how to apply it. Humans most often think linearly. This is not a statement about linear vs. non-linear equations; it's about a common error in the way folks think. It's also why Moore's Law (which Gordon Moore never called a "law", only an "observation") is so hard for many people. Gordon's observation was correct: advances in semiconductor technology act on a surface, and thus their effect compounds as the square over some time period. His educated guess, based upon the few iterations he had personally observed, was that the time period would be 18 months.
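A minimal sketch of the braking arithmetic, assuming a hypothetical 1,500 kg vehicle (the mass cancels out of the ratio anyway):

```python
# Kinetic energy the brakes must dissipate, E = 1/2 * m * v^2, comparing
# 100 -> 90 MPH with 20 -> 10 MPH. The 1500 kg mass is illustrative only.
MPH_TO_MS = 0.44704
MASS_KG = 1500.0

def braking_energy_joules(mass_kg, v_from_mph, v_to_mph):
    """Energy dissipated slowing from v_from_mph to v_to_mph."""
    v1 = v_from_mph * MPH_TO_MS
    v2 = v_to_mph * MPH_TO_MS
    return 0.5 * mass_kg * (v1 ** 2 - v2 ** 2)

high = braking_energy_joules(MASS_KG, 100, 90)   # roughly 285 kJ
low = braking_energy_joules(MASS_KG, 20, 10)     # roughly 45 kJ

print(f"100 -> 90 MPH: {high / 1000:.0f} kJ")
print(f" 20 -> 10 MPH: {low / 1000:.0f} kJ")
print(f"ratio: {high / low:.1f}x")  # about 6.3x, not the 1x a linear guess suggests
```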
My point is that this sort of error in thinking is endemic in humans, and that it is currently influencing many people's thinking about things like self-driving cars, the ability to control rockets and have them land upon re-entry, and numerous other technologies. I completely agree that there are activities which are NOT influenced by the underlying devices and their relatively relentless march along Moore's Law. (BTW, Moore's Law may have run its course.) But then we bang into
Metcalfe's Law (Metcalfe did call it a law in presentations I saw) which basically says that the utility of a network goes up as approximately the square of the number of nodes. There are a number of assumptions that Bob Metcalfe made which are dubious, but there is something approximating Bob's observation going on.
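A minimal sketch of the arithmetic behind that claim; counting node pairs is only one way to model "utility", and it bakes in the (dubious) assumption that every connection is equally valuable:

```python
# Metcalfe-style scaling: if utility is proportional to the number of
# distinct pairs of nodes that can communicate, it grows as n*(n-1)/2,
# i.e. approximately the square of the number of nodes.

def potential_connections(n_nodes: int) -> int:
    """Number of distinct node pairs in a fully connected network."""
    return n_nodes * (n_nodes - 1) // 2

for n in (10, 100, 1000, 10000):
    print(f"{n:>6} nodes -> {potential_connections(n):>12,} possible pairings")
# Ten times the nodes yields roughly a hundred times the pairings,
# which is the "approximately the square" behavior described above.
```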
Having found this cognitive error in numerous technically competent folks, I look for it often. While it's probably NOT the case that all of whatever one defines as "AI" is going to go through some massive discontinuous change, it is certainly true that numerous pieces of the field have already done so.
Of course, to someone who uses "AI" daily, or builds semiconductor parts, or builds networking equipment, these points are painfully obvious and therefore dismissed as trivial. Gordon, and to a lesser extent Bob, thought that folks made far "too much fuss" (to quote Gordon) about all this. "It's obvious to anyone who knows how our industry works." That is true. But what's going on is not that practitioners don't think clearly about what they are working on; it's that the rest of the human race doesn't think in this way. Hence the continual surprise among the general population at the seemingly astounding advancements of semiconductors, disk drives, networks, and so on, and the software which runs on them.
Keith, I don't think that removing humans from driving a car will get rid of car accidents, but I do think it will greatly reduce them. As crude as the Tesla Autopilot is, it's better than a drunk driver in many (if not most) cases, which is why I cited the drunk-driving fatalities in addition to the overall fatalities. Of course, our opinions on this won't matter; we'll know the answer within our lifetime. The benefits are so obvious and the commercial opportunity so large that if this problem can be solved, it will be solved within a decade or maybe two. We can circle back over a drink and discuss what happened.
Meantime, we can watch robot controlled rocket ships delivering supplies to the space station and ponder why we would ever send a human to Mars if we could just send better robots to figure out what we want to know.
(The above statement is made with the pre-existing caveat that I usually know "what" is going to happen and rarely know "when" it will happen.)