Benno von Humpback wrote:Jamie wrote:It's amusing now, but I think the improvement will come fast. Every time I click on a captcha I feel like I'm helping refine their driving algorithms. Is that why I am having to click on the traffic lights?
If they're as bad at that as I am, watch out!
kdh wrote:Benno von Humpback wrote:Jamie wrote:It's amusing now, but I think the improvement will come fast. Every time I click on a captcha I feel like I'm helping refine their driving algorithms. Is that why I am having to click on the traffic lights?
If they're as bad at that as I am, watch out!
I'm skeptical mostly because of my work in artificial intelligence in the 80s when there was similar apparent promise that was unfulfilled. The current crop of people touting AI's potential weren't around in the 80s. Yes, even though "Moore's Law" (exponential growth of transistor density) is dead, computers are faster and have a shit-ton more memory than in the 80s, and we have much more data available, but machine learning, even modern "deep learning" (a neural net with hidden layers) is still nothing more than learning by example, which is hugely limiting in its potential efficacy.
We're naturally fooled by the notion that if we humans can do something it's easy to teach a computer to do it. This is basically Musk's argument against using LIDAR sensors in Teslas--they're not needed because we humans don't use LIDAR. This is laughably naive.
TheOffice wrote:Crash repair would probably depend on whether the battery pack was damaged.
Beau, the Tesla charger could be made compatible with Porsche. I just don’t see why Tesla would do it.
Chris Chesley wrote:As a car manufacturer (not me, but if I were) I'd still be pretty wary of a wholesale shift to only all-electric vehicles. When was the last study/audit made of overall national electric generating capacity? If EVs were to become 5, 10, or 20% of the vehicle population in just a few years, do we REALLY have the capacity to charge them with our current generating infrastructure? Without more CO2 emissions as well? (Some rough numbers below.)
Secondarily, while I would welcome small, lightweight, long(ish)-range EVs, I would not have much (any?) desire to share the freeways with even half of the existing larger, heavier ICE vehicles, not even counting trucks.
Another concern I have is that the exponential growth in building, maintaining, and disposing of or recycling batteries may not be as wonderful as it seems when done on a larger scale (e.g., rare-earth availability is currently constrained, the energy required to build the batteries may exceed the energy ultimately delivered by the batteries (see similar issues with PV panels), and there are hazmat/disposal issues).
Overall, I suspect there may be a few follow-on consequences of a wholesale shift towards EVs in our transportation networks. What works well on an individual basis may not really scale up so well....
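To put rough numbers on the capacity question, here's a quick back-of-the-envelope check in Python. The figures (vehicle count, annual mileage, per-mile consumption, total US generation) are my own round-number assumptions, not audited data, so treat the output as an order-of-magnitude estimate only.

```python
# Rough back-of-the-envelope check: extra annual generation needed for a partial EV fleet.
# All constants below are round-number assumptions for illustration, not audited figures.

US_VEHICLES = 280e6          # assumed US registered light-duty vehicles
MILES_PER_YEAR = 12_000      # assumed average annual mileage per vehicle
KWH_PER_MILE = 0.30          # assumed EV consumption, wall-to-wheels
US_GENERATION_TWH = 4_000    # assumed annual US electricity generation, TWh

for ev_share in (0.05, 0.10, 0.20):
    ev_count = US_VEHICLES * ev_share
    demand_twh = ev_count * MILES_PER_YEAR * KWH_PER_MILE / 1e9  # kWh -> TWh
    print(f"{ev_share:>4.0%} EVs: ~{demand_twh:,.0f} TWh/yr "
          f"(~{demand_twh / US_GENERATION_TWH:.1%} of current generation)")
```

Under those assumptions even a 20% EV fleet works out to roughly 5% of today's annual generation, though that says nothing about peak charging demand or local distribution grids, which is a separate and probably harder question.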
Jamie wrote:kdh wrote:Benno von Humpback wrote:Jamie wrote:It's amusing now, but I think the improvement will come fast. Every time I click on a captcha I feel like I'm helping refine their driving algorithms. Is that why I am having to click on the traffic lights?
If they're as bad at that as I am, watch out!
I'm skeptical mostly because of my work in artificial intelligence in the 80s when there was similar apparent promise that was unfulfilled. The current crop of people touting AI's potential weren't around in the 80s. Yes, even though "Moore's Law" (exponential growth of transistor density) is dead, computers are faster and have a shit-ton more memory than in the 80s, and we have much more data available, but machine learning, even modern "deep learning" (a neural net with hidden layers) is still nothing more than learning by example, which is hugely limiting in its potential efficacy.
We're naturally fooled by the notion that if we humans can do something it's easy to teach a computer to do it. This is basically Musk's argument against using LIDAR sensors in Teslas--they're not needed because we humans don't use LIDAR. This is laughably naive.
From what I understand of machine learning, no teaching or learning the way we understand it is performed at all, and we actually don’t know the underlying means by which the “taught” algorithm performs its tasks.
It only needs to be better than us, and we’re not that great.
BeauV wrote:There is no real mechanism in our economy to charge the builder of a truly bad device or service for the damage they do if customers want it.
We have a deeply mistaken belief in many of the supposed benefits of what we laughingly call a "free market". An inability to deal with long term consequences of what we do is just one of them. But, you're absolutely correct, it's a terrible aspect of our stupidity as a people. (Rant Off)
kdh wrote:Jamie wrote:kdh wrote:Benno von Humpback wrote:Jamie wrote:It's amusing now, but I think the improvement will come fast. Every time I click on a captcha I feel like I'm helping refine their driving algorithms. Is that why I am having to click on the traffic lights?
If they're as bad at that as I am, watch out!
I'm skeptical mostly because of my work in artificial intelligence in the 80s when there was similar apparent promise that was unfulfilled. The current crop of people touting AI's potential weren't around in the 80s. Yes, even though "Moore's Law" (exponential growth of transistor density) is dead, computers are faster and have a shit-ton more memory than in the 80s, and we have much more data available, but machine learning, even modern "deep learning" (a neural net with hidden layers) is still nothing more than learning by example, which is hugely limiting in its potential efficacy.
We're naturally fooled by the notion that if we humans can do something it's easy to teach a computer to do it. This is basically Musk's argument against using LIDAR sensors in Teslas--they're not needed because we humans don't use LIDAR. This is laughably naive.
From what I understand of machine learning, no teaching or learning the way we understand it is performed at all, and we actually don’t know the underlying means by which the “taught” algorithm performs its tasks.
It only needs to be better than us, and we’re not that great.
Yes, that's the way it's characterized by most who write about it, so it's a common view.
A neural net is a map, a function, that takes input data, driving information from sensors in this case, to an output, the steering and accelerator controls of the car. The function is parameterized: it has neural-net "weights" that are learned, or more often "trained," from input-output pairs, such as the Captcha data we've all been providing by identifying traffic lights, motorcycles, pedestrians, etc. The functional form of a neural net is often described as "non-linear," and indeed it's straightforward to prove that a net with even a single hidden layer, never mind the stacks of them associated with so-called "deep learning," can approximate any practical function with arbitrarily good accuracy.
In other words, to use perfectly adequate terms, the neural net represents a "statistical fit to data." A good reference for these modern statistical techniques is "Pattern Recognition and Neural Networks" by Brian Ripley if you want to cut through all the silly and unnecessary new terms and perspectives and just understand the math.
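For the curious, here's a minimal sketch of that "statistical fit to data" view: a toy single-hidden-layer net in plain NumPy, trained by gradient descent on input-output pairs from a noisy sine curve. The layer size, learning rate, and step count are arbitrary illustrative choices, nothing more.

```python
# A one-hidden-layer neural net fit to input-output pairs: just a parameterized
# function whose weights are adjusted to reduce squared error on the data.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-np.pi, np.pi, size=(200, 1))        # inputs
y = np.sin(x) + 0.1 * rng.standard_normal(x.shape)   # noisy targets

hidden = 16
W1 = rng.standard_normal((1, hidden)) * 0.5
b1 = np.zeros(hidden)
W2 = rng.standard_normal((hidden, 1)) * 0.5
b2 = np.zeros(1)

lr = 0.05
for step in range(5000):
    # forward pass: the "map" from input to output
    h = np.tanh(x @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y
    loss = np.mean(err ** 2)                          # least-squares fit criterion

    # backward pass: gradients of the loss w.r.t. the weights
    n = x.shape[0]
    dpred = 2 * err / n
    dW2 = h.T @ dpred
    db2 = dpred.sum(axis=0)
    dh = dpred @ W2.T * (1 - h ** 2)
    dW1 = x.T @ dh
    db1 = dh.sum(axis=0)

    # "training" is nothing more than nudging the parameters to improve the fit
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final mean-squared error: {loss:.4f}")
```

Nothing in there "understands" sine waves; the loop simply adjusts the weights until the fitted function tracks the examples it was given.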
kdh wrote:BeauV wrote:There is no real mechanism in our economy to charge the builder of a truly bad device or service for the damage they do if customers want it.
We have a deeply mistaken belief in many of the supposed benefits of what we laughingly call a "free market". An inability to deal with long term consequences of what we do is just one of them. But, you're absolutely correct, it's a terrible aspect of our stupidity as a people. (Rant Off)
I'll argue that our system does have a mechanism--government regulation. Carbon taxes, battery-disposal taxes, or, more generally, rules that shift incentives to protect the public good. To me, anyway, a pure profit motive is inadequate.
Jamie wrote:I don’t see the difference from “organic” learning, except that computers don’t forget and can have access to larger data sets.
“Free” market is a myth. Even chaotic systems develop “rules” over time.
BeauV wrote:Jamie wrote:I don’t see the difference from “organic” learning, except that computers don’t forget and can have access to larger data sets.
“Free” market is a myth. Even chaotic systems develop “rules” over time.
Jamie, I agree that computers have massive data sets to work with and they appear to process it faster than we can. But.... I think that we are attending to the conscious mind, the one that thinks rationally and/or logically. We are ignoring, or at the least disregarding, the unconscious mind. I believe this is a flaw in our understanding of "thought" and "understanding".
From what I've read, Mozart simply wrote down the music as he knew it should be. He wasn't "composing," he was "documenting" what was flowing through his mind. I've considered this for a long time and feel that there is some other form of thought which isn't available to the linguistically and logically oriented bits of our brains. It is the "other brain" which takes over when one is skiing a really difficult line, when driving insanely fast and the world slows down, when sitting at the piano and just letting whatever is in there come out. It's then that I feel that other mental capacity straining to get out.
It's fun to consider what capacity is held by this other part of the brain.
When faced with an insanely difficult problem, I go for a walk. I try to get lost. I try to "not think". Then, all at once, the answer appears. All at once I know what the solution is. I have no idea how that knowledge entered my mind, but it's obviously correct. It's blindingly correct.
I've spent most of my adult life trying to be able to call up this feeling on demand; mostly failing.
Perhaps this is what the zen master really knows: how to enter this mental state. I have no idea how to do it.
Olaf Hart wrote:That intuitive mind works on pattern recognition, Beau.
I am sure Eric has a lot more to tell us about this, it’s a very interesting topic.
kdh wrote:On the brain and how we think, I only know what I read in "Thinking, Fast and Slow" by Daniel Kahneman. Great book.
In school I lived down the road from where Emily Dickinson grew up and I used to walk to campus. All creativity was accomplished on those walks.
TheOffice wrote:I’ve never considered 128 a controlled environment!
kdh wrote:TheOffice wrote:I’ve never considered 128 a controlled environment!
Joel, you have an ex-wife in Concord or is that someone else?
SHANGHAI—A little-known Chinese company has become the world’s biggest maker of electric vehicle batteries.
Beijing engineered a scenario that didn’t give the world much choice.
China is by far the biggest EV market, and to boost its standing in the fast-growing industry, China began pressuring foreign auto makers to use locally-made batteries in the country several years ago. One company—Contemporary Amperex Technology Ltd., known as CATL—was the only shop capable of producing them at scale.
Auto makers weren’t pleased, but they fell in line. During a visit to CATL headquarters in 2017, three Daimler AG executives displayed their irritation shortly after the meeting started, recalled Jiang Lingfeng, then a CATL project manager who prepared a technical briefing for the visitors.
One Daimler executive cut off his briefing, said Mr. Jiang. “We’re not interested,” the executive said, according to Mr. Jiang. “The only reason we’re here is that we have no choice, so let’s just talk about the price.”