Friday, June 29, 2018

Sparse Pattern Recognition on Cheap Hardware

Just a quick note. Pattern recognition in spiking neural networks is almost magical. Once the network is properly trained, good recognition performance requires amazingly little information and is highly robust to noise and distortion. This is what allows us to see shapes in the clouds and to recognize a huge variety of typefaces and handwriting styles. It is all due to the magic of timing. The really good news is that it is possible to get excellent results using a regular multi-core processor, because one can disable a lot of random neurons in a trained spiking neural network without significantly affecting performance.
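To illustrate the robustness claim, here is a toy sketch (not from the post; all names and numbers are illustrative). A "trained" pattern is modeled as the set of neurons that spike for a learned shape, recognition as overlap with that stored set, and we check that randomly disabling 30% of the neurons barely changes the recognition score for a noisy presentation:

```python
import random

random.seed(0)

N = 1000  # neurons in the input layer (illustrative size)
# Prototype "trained" pattern: the set of neurons that spike for a learned shape
prototype = set(random.sample(range(N), 200))

def recognition_score(active, disabled):
    """Overlap between an incoming spike pattern and the stored prototype,
    counted only over neurons that have not been disabled."""
    alive_proto = prototype - disabled
    if not alive_proto:
        return 0.0
    return len(active & alive_proto) / len(alive_proto)

# Noisy presentation of the learned pattern: drop ~10% of its spikes
# and add 50 stray spikes elsewhere
noisy = {s for s in prototype if random.random() > 0.10}
noisy |= set(random.sample(range(N), 50))

# Disable 30% of all neurons at random, as in the post's claim
disabled = set(random.sample(range(N), 300))

intact = recognition_score(noisy, set())
ablated = recognition_score(noisy, disabled)
print(f"intact: {intact:.2f}  after ablation: {ablated:.2f}")
```

Both scores come out near 0.9: since the surviving prototype neurons still carry the pattern, removing neurons at random degrades the match gracefully rather than catastrophically, which is the point being made about running such networks on cheap commodity hardware.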

Have a good weekend.

4 comments:

Louis Savain said...

I'm hoping to have some kind of demo program in the not too distant future. I am also planning to publish an article on my new understanding of pattern learning in the brain.

Spent Death said...

Did you check out Grossberg's outstar/instar networks? (The main work is from the mid-to-late 1970s; the earliest work on this dates from the late '50s to mid '60s.) Look for shunting membrane models with spiking leaky integrate-and-fire neurons for a more biologically plausible simulation that generalizes these methods. He has a very simple design that converges fast, and he proves fast convergence asymptotically. I mentioned in a previous post a while back that you can do it instantly, without even using neurons, or randomness.

Radha Sai said...

Nice post. Keep updating Artificial intelligence Online Training

Radha Sai said...

Nice post. Keep updating Artificial intelligence Online Training