Wednesday, February 10, 2016

Why Gravitational Waves Are Nonsense

LIGO

As most of you have heard, tomorrow is the day of the big announcement from LIGO regarding their gravitational wave detection experiment. As my readers have come to expect, I always have a rebellious point of view in matters concerning physics, artificial intelligence and computer science. Once again, I aim to please. I detest relativistic physics about as much as I detest mainstream AI. Einsteinian physics has retarded progress in our understanding of the universe by at least a century, in my opinion. I have written about this on many occasions in the past (see the links at the end of this article). Below I explain in simple terms why gravitational waves are crap and why any announcement from anybody that such waves have been detected is either fraudulent or another pathetic error.

The Circus Has Not Left Town Yet

There are many reasons that gravitational waves are nonsense, but what follows is my favorite. It is very simple and it won't take long. As usual, it has to do with infinite regress. In my research, I have found that nearly everything that is wrong with both classical and quantum physics traces back to infinite regress. So here goes.

Gravity affects everything that exists equally, regardless of mass, and this includes massless particles. Both Newton and Galileo understood this centuries ago, even though relativists claim to be the ones who figured it out. Go figure.

The problem is that this undeniable principle means that gravity also affects gravitational waves: a gravitational wave is itself gravity, so it must act on itself. Since these waves affect themselves, they either cancel themselves out or amplify themselves recursively, without end. The same objection applies to so-called curved space and to hypothetical intermediary particles such as gravitons. In other words, if it exists, gravity affects it, regardless of its mass. The infinite self-referential regress is too painful to even contemplate.
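To make the regress concrete, here is a toy iteration in Python. It is purely my own sketch of the argument, not physics: the coupling factor k is an illustrative parameter, not a physical constant. The point is simply that a quantity that acts on itself at every step either dies out or blows up.

    # Toy illustration of the self-referential regress described above.
    # The coupling k is an illustrative assumption, not a physical constant.
    def self_coupled_amplitude(a0, k, steps):
        a = a0
        for _ in range(steps):
            a += k * a  # the wave acts on itself at every step
        return a

    print(self_coupled_amplitude(1.0, -0.1, 50))  # self-cancellation: tends to 0
    print(self_coupled_amplitude(1.0, 0.1, 50))   # self-amplification: blows up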

Conclusion

Paul Feyerabend was right when he wrote in Against Method, "the most stupid procedures and the most laughable results in their domain are surrounded with an aura of excellence. It is time to cut them down to size and to give them a lower position in society." Einsteinian physics is indeed laughable. To borrow the words of Wolfgang Pauli, it is not even wrong.

All relativists are out to lunch. Tomorrow, I'll be watching the whole circus unfold with a bag of Cheetos in one hand, a beer in the other and a smirk on my face.

Live LIGO Broadcast on YouTube

See Also:

How to Falsify Einstein's Physics, for Dummies
Why Space (Distance) Is an Illusion
How Einstein Shot Physics in the Foot
Nasty Little Truth About Spacetime Physics
Why Einstein's Physics Is Crap
Nothing Can Move in Spacetime
Physics: The Problem with Motion
Sitting on a Mountain of Crap, Wasting Time
Physicists Don't Know Shit

Thursday, January 28, 2016

Mark Zuckerberg Understands the Problem with DeepMind's Brand of AI

DeepMind's Go-Playing Program

In the wake of DeepMind's announcement that their Go-playing program has defeated the European Go champion, I thought I would write a short post to express my usual rebellious point of view. DeepMind is claiming that this is another giant step toward true AI, but this could not be further from the truth. In spite of all the hoopla and the back-slapping, I can assure everyone that there has been no breakthrough and no giant step toward true general machine intelligence. This time, though, I will let Facebook's Mark Zuckerberg explain why DeepMind's Go program is not even close to true AI.

Zuckerberg Nails It

In a recent blog post, Zuckerberg had this to say about supervised learning, the kind of machine learning used by DeepMind (emphasis mine):
Our best guess at how to teach an AI common sense is through a method called unsupervised learning. My example of supervised learning above was showing a picture book to a child and telling them the names of everything they see. Unsupervised learning would be giving them a book and letting them figure out what to do with it. They could pick it up and by touching it learn to turn the pages. Or they could let go of it and realize it falls to the ground.

Unsupervised learning is learning how the world works by observing and trying things out rather than being told what to do. This is how most animals learn. It's key to building systems with human-like common sense because it doesn't require a person to teach it everything they know. It gives the machine the ability to anticipate what may happen in the future and predict the effect of an action. It could help us build machines that can hold conversations or plan complex sequences of actions -- necessary components for any authentic Jarvis.

Unsupervised learning is a long term focus of our AI research team at Facebook, and it remains an important challenge for the whole AI research community.

Since no one understands how general unsupervised learning actually works, we're quite a ways off from building the general AIs you see in movies. Some people claim this is just a matter of getting more computing power -- and that as Moore's law continues and computing becomes cheaper we'll naturally have AIs that surpass human intelligence. This is incorrect. We fundamentally do not understand how general learning works. This is an unsolved problem -- maybe the most important problem of this century or even millennium. Until we solve this problem, throwing all the machine power in the world at it cannot create an AI that can do everything a person can.
Zuckerberg is right.
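For readers who want the distinction in concrete terms, here is a minimal sketch using scikit-learn's toy digits dataset. Everything in it (the dataset, the two estimators) is my own illustration of the two regimes Zuckerberg is describing, not his method or DeepMind's.

    # Supervised vs unsupervised learning in a few lines (scikit-learn).
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.cluster import KMeans

    X, y = load_digits(return_X_y=True)

    # Supervised: a human-provided label accompanies every training example.
    clf = LogisticRegression(max_iter=5000).fit(X, y)
    print("supervised training accuracy:", clf.score(X, y))

    # Unsupervised: no labels at all; the algorithm must discover
    # structure (here, ten clusters) on its own.
    km = KMeans(n_clusters=10, n_init=10).fit(X)
    print("cluster labels for the first ten digits:", km.labels_[:10])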

See Also:

Why Deep Learning Is a Hindrance to Progress Toward True AI

Monday, December 21, 2015

Fear in Silicon Valley

OpenAI, the Savior of the World

Those of you who have an interest in artificial intelligence may have noticed a strange recent announcement from some of the big movers and shakers in Silicon Valley. We are told that humanity faces an imminent existential threat: the birth of true AI. It seems that the best way to defend ourselves against this potential evil is to make sure that any breakthrough in AI is quickly disseminated to the entire world. So they formed a nonprofit corporation called OpenAI to do just that. And they mean business. Silicon Valley luminaries like Elon Musk, Sam Altman and Peter Thiel have already committed one billion dollars to the company, and they've brought in well-known experts from the machine learning research community.

Panic at the Top

In my opinion, the real purpose of OpenAI is not at all what its investors claim. Remember that these are people who are already heavily invested in various other for-profit companies that are hard at work on creating proprietary AI technologies. The conflict of interest is obvious. I think this is a sign of fear among the big dogs: OpenAI is just brain bait. Here is what I think happened. The lords of Silicon Valley have come to the realization that true AI may emerge from anywhere, and not necessarily from the Googles, Baidus, Microsofts and Facebooks of the world. It dawned on them that it is quite possible that some maverick genius working in a garage somewhere will figure it out before they do. After all, information is freely available on the net and fast cloud computing is getting cheaper every day. OpenAI is essentially saying: "Look, we got truckloads of cash and we are the good guys. Come join forces with us."

It ain't gonna work. Why? Because whoever figures out AI will have no need of Silicon Valley's money, let alone its reptilian morality. If necessary, he or she will eat Silicon Valley for breakfast before it realizes what just happened. We live in interesting times.

P.S. In a few days, I will delete a number of old AI articles from this blog because they are obsolete in certain areas. My understanding of intelligence and the brain has evolved tremendously over the last several years, and those articles do not fully or accurately reflect my current thinking. I also know that a few among you read what I write to get ideas for your own AI projects. All I can say is that I am sorry if I misled you. When the time comes, all will be revealed.

See Also:

Why Deep Learning Is a Hindrance to Progress Toward True AI
In Spite of the Successes, Mainstream AI is Still Stuck in a Rut

Wednesday, September 16, 2015

No Way Out

Indecision

I am torn between two options: releasing the results of my speech learning research to the public or erasing everything that I have worked on these many years. The world is in such a precarious state that I cannot think of any scenario in which the introduction of artificial intelligence will not do great harm to humanity. The problem with AI is not that intelligent machines might rebel against us (that prediction is just stupid materialist superstition), but that the powers that be will use the technology to gain even more power and impose their twisted will on humanity. The world could quickly turn into a hell much worse than anything George Orwell could have imagined. I would rather die than live with the knowledge that I might have contributed to it.

Wednesday, June 3, 2015

Why Deep Learning Is a Hindrance to Progress Toward True AI

Supervised vs Unsupervised Learning

In a recent article titled EmTech Digital: Where Is AI Taking Us?, MIT Technology Review editor Will Knight writes:
However, [Quoc] Le said that the biggest obstacle to developing more truly intelligent computers is finding a way for them to learn without requiring labeled training data—an approach called “unsupervised learning.”
This is interesting because we hear so much buzz lately about how revolutionary and powerful deep learning is and about how truly intelligent machines are just around the corner because of it. And yet, if one digs deeper, one quickly realizes that all this success is happening thanks to a machine learning model that will soon have to be abandoned. Why? Because, as Google Brain research scientist Quoc Le says, it is based on supervised learning.

No True AI Is Coming from the Mainstream AI Community Anytime Soon

I have reasons to believe that true AI is right around the corner, but I don't see it coming from the mainstream AI community. Right now, they are all having a feeding frenzy over a soon-to-be-obsolete technology. There is no question that deep learning is a powerful and useful machine learning technique, but it works in a narrow domain: the classification of labeled data. The state of the art in unsupervised learning (no labels) has so far been a joke. The accuracy of current unsupervised deep neural networks, such as Google's cat recognition program, is truly abysmal (15% or less), and there is no clear path to success.

Time: The Universal Bottom-up Critic

One of the reasons that the performance of unsupervised machine learning is so pathetic, in my opinion, is that researchers continue to use what I call static data, such as pictures, to train their networks. Temporal information is simply ignored, which is a bummer, since time is the key to the AI kingdom. And even when time is taken into consideration, as in recurrent neural networks, it is not part of a fundamental mechanism that builds a causal understanding of the sensory space. It is merely used to classify labeled sequences.

Designing an effective unsupervised learning machine requires that we look for a natural replacement for the top-down labels. As we all know, supervised or not, every learning system must have a critic. Thus the way forward is to abandon the top-down critic (i.e., label-based backpropagation) and adopt a universal bottom-up critic. It turns out that the brain can only work with temporal correlations, of which there are two kinds: sensory signals are either concurrent or sequential. In other words, time should be the learning supervisor, the bottom-up critic. This way, memory is constructed from the bottom up, not the top down, which is as it should be. A toy sketch of the idea follows.
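The sketch below is my own construction in Python, with a made-up event stream and illustrative window sizes; it is not a brain model. The point is only that concurrence and sequence can be read straight off the timestamps, with no labels anywhere in sight.

    # Toy sketch: tallying concurrent vs sequential correlations from
    # a timestamped sensory stream. All values are illustrative.
    from collections import Counter
    from itertools import combinations

    events = [  # (timestamp, sensor) -- a hypothetical sensory stream
        (0.00, "A"), (0.01, "B"), (0.50, "C"),
        (1.00, "A"), (1.02, "B"), (1.51, "C"),
        (2.00, "A"), (2.01, "B"), (2.49, "C"),
    ]

    CONCURRENT_WINDOW = 0.05  # closer than this counts as simultaneous
    SEQUENTIAL_WINDOW = 0.60  # within this gap counts as a sequence

    concurrent, sequential = Counter(), Counter()
    for (t1, s1), (t2, s2) in combinations(events, 2):  # events are time-ordered
        if s1 == s2:
            continue
        dt = t2 - t1
        if dt <= CONCURRENT_WINDOW:
            concurrent[frozenset((s1, s2))] += 1
        elif dt <= SEQUENTIAL_WINDOW:
            sequential[(s1, s2)] += 1  # ordered: s1 fires before s2

    print("concurrent pairs:", concurrent.most_common())
    print("sequential pairs:", sequential.most_common())

Run it and the stream's structure falls out by itself: A and B always fire together, and C reliably follows them. Time did all the supervising.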

The Deep Learning Killer Nobody Is Talking About

Supervision aside, the biggest problem with deep neural networks is that, unlike the neocortex, they are completely blind to patterns they have never seen before. The brain, by contrast, can instantly model a new pattern. It is obvious that the brain uses a knowledge representation architecture that is instantly malleable and shaped by the environment. As far as I know, nobody in mainstream AI is working on this amazing capability of the brain. I am not even sure they are aware of it.
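The blindness is easy to demonstrate. In the sketch below (my own construction, again using scikit-learn's digits set), a classifier trained only on the digits 0 through 8 is shown 9s: it has no way to flag them as novel and confidently forces them into the classes it knows.

    # Sketch: a model trained without a class has no concept of "novel".
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression

    X, y = load_digits(return_X_y=True)
    known = y < 9  # train on digits 0-8 only
    clf = LogisticRegression(max_iter=5000).fit(X[known], y[known])

    nines = X[y == 9]  # patterns the model has never seen
    print("labels forced onto unseen 9s:", clf.predict(nines)[:10])
    print("mean top-class confidence:", clf.predict_proba(nines).max(axis=1).mean())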

Conclusion: Be Careful

Sensory learning is all about patterns and sequences of patterns, something that mavericks like Jeff Hawkins have been saying for years now. The trick is to know how to use patterns and sequences to design the correct (there is only one, in my opinion) knowledge representation architecture. Hawkins is a smart guy, probably the smartest guy in AI right now, but I believe a few of his fundamental assumptions are wrong, not the least of which is his continued commitment to a probabilistic approach. As Judea Pearl put it recently, we are not probability thinkers but cause-effect thinkers. And this is coming from someone who has championed the probabilistic approach to AI throughout his career.

In conclusion, I will reiterate that the future of AI is both temporal and non-probabilistic. It may be alright to invest in deep learning technologies for now, but be careful. Deep learning will become an obsolete technology much sooner than most people in the business believe.

See Also:

In Spite of the Successes, Mainstream AI is Still Stuck in a Rut
No, a Deep Learning Machine Did Not Solve the Cocktail Party Problem
Mark Zuckerberg Understands the Problem with DeepMind's Brand of AI

Wednesday, April 29, 2015

No, a Deep Learning Machine Did Not Solve the Cocktail Party Problem

Irresponsible Hype from MIT Technology Review

MIT Technology Review is running a story claiming that a group of machine learning researchers used a deep convolutional neural network to solve the cocktail party problem. Don't you believe it. The network in question has to be pre-trained separately on individual vocals and musical instruments in order to separate the vocals from the background music. In other words, it can only separate voice from music.
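To see why the pre-training matters, consider the time-frequency masking that such separation systems perform. The numpy sketch below is my own illustration, not the researchers' code: with an oracle mask, separation is trivial, but in the real system a deep net must learn to predict that mask from labeled vocal and instrument recordings, which is exactly the pre-training I am objecting to.

    # Illustrative numpy sketch of mask-based source separation.
    import numpy as np

    rng = np.random.default_rng(0)
    vocals = rng.random((128, 64))  # stand-in magnitude spectrograms
    music = rng.random((128, 64))
    mixture = vocals + music

    # Oracle "ideal ratio mask". A deep net must be trained on labeled
    # vocal/instrument data to approximate this; it cannot conjure a
    # mask for a source it has never been trained on.
    mask = vocals / (vocals + music)
    estimated_vocals = mask * mixture
    print("reconstruction error:", float(np.abs(estimated_vocals - vocals).mean()))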

The human brain needs no such training. We can instantly latch on to any voice or sound, even one we have never heard before, while ignoring all others. We have no trouble focusing on a strange voice speaking a foreign language in a room full of talking people, with or without music playing. This is what the true cocktail party problem is about. A deep learning network cannot pay attention to an arbitrary voice while ignoring the others. To do this, it would have to be pre-trained on all the voices individually.

Note: I posted a protest comment at the end of the article, but the MIT Tech Review editors chose to censor it. I guess it is easier to attract visitors with a lie than with the truth.

It Is Not about Speech

Contrary to rumors, the cocktail party problem has nothing specifically to do with speech or sounds. To focus on individual sounds, the brain uses the same mechanism that it normally uses to pay attention to anything, be it a bird, the letters and words on the computer screen or grandma's voice. The attention mechanism of the brain is universal and is an inherent part of the architecture of memory and how objects are represented in it. Unlike deep learning neural networks, it does not have to be trained separately for every sound or object. The ability of the cortex to instantly model a novel visual or auditory object is a major part of the brain's attention mechanism.

It is clear that the auditory cortex can quickly model a new sound on the fly and tune its attention mechanism to it. No deep learning network can do that. And knowing what I know about how the brain's attention mechanism works, I can confidently say that no deep learning network can ever do that.

See Also:

Did OSU Researchers Solve the Cocktail Party Problem?
In Spite of the Successes, Mainstream AI is Still Stuck in a Rut
Why Deep Learning Is a Hindrance to Progress Toward True AI