Spiking / time-domain models are definitely an answer to the efficiency problem.
I think neuromorphic hardware is putting the cart before the horse. We should start with neuroevolution experiments that seek to discover effective recurrent spiking topologies.
There are non-linear networks that are so efficient that we wouldn't need specialized hardware to run them. The trade-off is that they're incredibly hard to find. I think we might have enough compute on hand now.
Assuming a generalist online learning model exists, we'd only have to find it once. This isn't like backpropagation. Activation = learning when techniques like spike-timing-dependent plasticity are used.
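To make the "activation = learning" point concrete: in pair-based STDP the weight update depends only on the relative timing of pre- and post-synaptic spikes, so learning happens locally while the network runs, with no separate backward pass. A minimal sketch in Python (the `stdp_dw` helper and all parameter values are illustrative assumptions, not anything from the article):

```python
import numpy as np

# Minimal pair-based STDP sketch (illustrative only; the parameter values
# below are arbitrary assumptions, not taken from the article).
A_PLUS, A_MINUS = 0.01, 0.012     # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # decay time constants in ms

def stdp_dw(t_pre: float, t_post: float) -> float:
    """Weight change for a single pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:   # pre fired before post -> strengthen the synapse
        return A_PLUS * np.exp(-dt / TAU_PLUS)
    else:        # post fired before (or with) pre -> weaken it
        return -A_MINUS * np.exp(dt / TAU_MINUS)

# "Activation = learning": the weight is updated as spikes occur,
# with no separate backward pass over the whole network.
w = 0.5
for t_pre, t_post in [(10.0, 15.0), (40.0, 35.0), (60.0, 62.0)]:
    w = float(np.clip(w + stdp_dw(t_pre, t_post), 0.0, 1.0))
    print(f"pre={t_pre} ms, post={t_post} ms -> w={w:.4f}")
```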
I wonder how much of those 20 watts has nothing to do with thinking and instead goes to keeping us alive, alert to the environment, processing noise and other signals, etc.
I wouldn't be surprised if the fair comparison is closer to 10 watts vs. 100 megawatts. That would make us 10 million times more efficient.
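For the arithmetic behind that ratio (taking the 10 W and 100 MW figures at face value):

$$\frac{100\ \text{MW}}{10\ \text{W}} = \frac{10^{8}\ \text{W}}{10^{1}\ \text{W}} = 10^{7} = 10\ \text{million}$$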
This is about "neuromorphic computing", a model invented in the 1980s which colocates memory and compute, just like real brains, plus the "memristor", a weird electronic part invented in the 1970s. People have been doing research on both since then, and yet there is no progress.
This is yet another call to action, this time served with "AI consumes too much energy" sauce. I've seen these for more than two decades, and nothing has ever come out of them.
A special mention for this paragraph:
> The programmability challenge is perhaps the most significant. The von Neumann architecture comes with 80 years of software development, debugging tools, programming languages, libraries, and frameworks. Every computer science student learns to program von Neumann machines. Neuromorphic chips and in-memory computing architectures lack this mature ecosystem.
This is total B.S., especially as applied to AI - there is no need for an "ecosystem" of millions of software libraries; there is a handful of algorithms you need to run, and that's it, the thing can earn money. And of course plenty of people work with FPGAs or custom logic that has nothing to do with von Neumann machines - and they get things done. If you have a new technology and you cannot build even a few sample apps on it... don't blame the establishment, it just means that your technology does not work.
I think they have a point about merging the CPU & memory. It seems to have worked out well for Apple. Their proposal sounds like another step in the same direction.