We all have implicit beliefs, which we cling to without even thinking about them. Well, that is why they are implicit rather than explicit. For the most part, implicit beliefs are good: they reduce the mental load of questioning everything and allow us to get on with our lives.
To break those implicit beliefs is hard, and requires sound proof. We salute those who can show our implicit beliefs to be wrong.
Before Einstein, there was an implicit belief that an interval of time is independent of the frame of reference. No one talked about it, because no one thought about it. It was just true. It took Einstein to break that belief. Time moves more slowly in a fast-moving spaceship than on Earth, from the frame of reference of someone on Earth.
Long ago, people believed that the Earth is flat. The "proof" of the Earth's flatness seemed easy: just stand on a large plain (hard to find in today's cities, which are jungles of concrete) and observe for yourself.
Similarly, I had an implicit belief which got broken a few days ago. Rather, I realized a few days ago that I had an implicit belief, and that it had been broken.
That implicit belief was: "Understanding is something more than computation. Since computers are computation machines, they can never understand anything".
Sure, they can multiply two numbers faster than me, because of their silicon prowess, but they do not really "understand" multiplication. Sure, they can defeat me in chess because of their computational power, but they do not really "understand" chess.
I held on to my belief because, for my computer to do anything, I had to do the "understanding" part and leave the computation to the computer. It is my idiot savant. I understand the sorting algorithm, and code up the computer program to sort the numbers. I figure out Dijkstra's shortest path algorithm, and then code up the computer to run it. I know exactly what the computer is doing.
Deep learning is different. When you run a neural net, you do not really know what is happening. You can code up a neural net to recognize letters ("A", "B", etc.), but you do not tell the computer anything about the shapes of the letters. You let the neural net figure that out.
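A minimal sketch of what I mean (my own illustration, using scikit-learn and its built-in handwritten-digit images as a stand-in for letters; none of these names come from the original discussion): nowhere in the code do we describe the shape of any digit. We only show the network labeled examples, and it figures out the shapes itself.

```python
# Train a small neural net to recognize handwritten digits
# without ever telling it anything about the digits' shapes.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # 8x8 grayscale images of the digits 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# No strokes, loops, or lines are described anywhere -- only examples.
net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
net.fit(X_train, y_train)

print(f"test accuracy: {net.score(X_test, y_test):.2f}")
```

The network ends up recognizing digits it has never seen, yet the "knowledge" of what a "3" looks like lives only in its learned weights, not in anything we wrote.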
Now I find it hard to hold on to the belief that I understand what "A" is and what "B" is, while the computer can only compute. I am not telling the computer how to recognize "A" and "B". I am leaving it to the computer to figure that out.
This shows that "understanding" is not something larger than "computation". Understanding is an emergent property of computation. Given enough examples, computers can understand what the letter "A" is and what the letter "B" is.
This is discouraging at one level and encouraging at another. It is discouraging because it brings computers one level closer to humans. A few decades ago, a computer defeated Garry Kasparov, the then world chess champion, and came one level closer to humans. But that contest was between computation (which is the forte of computers) and understanding (which is the forte of humans). We consoled ourselves that we still have better understanding. Now that consolation is gone. Computers understand things as well as we do, perhaps better.
On another level, it is encouraging. We have always wondered what constitutes understanding. Neurologists can see neurons firing and the electrical signals and all that, but in between all that molecular activity, where is understanding? We now know that understanding is an emergent property of computation. It is nothing more than computation. So, there is nothing more for neurologists to figure out. There is no magic going on there.
The dichotomy between computation and understanding is somewhat like the dichotomy between hardware and software. What is software, after all? All that we see is hardware and instructions executing. Where is software? All program execution can be explained by hardware. Software is just a convenient name we give to certain behavior of hardware. Software is an emergent property of hardware, a useful construct, a shortcut that lets us talk about hardware in a comprehensible way. It is similar with understanding: it is just a convenient way of talking about computation.
For now, we have one consolation left. We are "conscious". Computers might compute and computers might understand, but they are not sentient. So, we stand on a higher pedestal than them. For now, that is a tenable argument, and there is no need to challenge it, because we are busy building neural nets to make computers understand more and more things.
R.I.P. Understanding.