Tuan Anh Le

Build a baby and let it grow

4 December 2021

The Dream

It would be great if we had robots that could clean our homes, cook us a nice meal, and after the meal, bring the dishes into the kitchen and wash them. Not only would such automation free us from menial tasks, it could truly enrich the lives of people unable to do these things themselves. What if we had personal assistants that could truly understand us, the way other human beings do? We could talk to them about our ideas, problems, and hopes, and they would know how to read between the lines and hold a sensible conversation. Or what if we had robots that could go into disaster zones to survey the damage and perform rescue missions too dangerous for human rescuers? These are just a few examples of how human-like AI would be truly transformative.

Climbing a tree to get to the moon?

There are many exciting developments in AI like GPT-3 which can generate seemingly human-like text, or Codex which can generate seemingly good code, both of which are powered by Transformers—big neural networks trained on huge datasets of text and code. Nevertheless, there are fundamental limitations of current AI systems which seem far from solved.

First, current models don’t generalize. A self-driving car trained on millions of kilometers of driving data can still be stumped by a previously unseen situation. For example, a white truck toppled on its side is mistaken for a bright white sky. There are adversarial examples in which human-imperceptible changes to the input cause drastic and arbitrary changes to the system’s output. There are almost infinitely many new things we humans can imagine, recognize, and think about that are unthinkable for current machines. Think of a large inflatable cat the size of a football field, floating in the sky among white clouds, casting a shadow onto a street full of motorcycles. This is an entirely made-up scene, and yet you’re able to imagine it. Current machines are far from this.

Second, current models don’t handle uncertainty. They don’t know how much they know, or whether they know at all. When we walk down a street and our view is blocked by a large building, we can entertain both the hypothesis that a car is coming and the hypothesis that no car is coming. In the face of this inherent uncertainty, we can decide to slow down, since the risk of being hit by a car is too high.
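The car-behind-the-building example can be made concrete as a tiny expected-cost calculation. This is only an illustrative sketch, and all the numbers (beliefs and costs) are made-up assumptions, not anything from a real system:

```python
# Illustrative sketch: even with a 50/50 belief about whether a car is
# coming, comparing the expected costs of actions lets an agent act
# sensibly under uncertainty. All numbers here are made-up assumptions.
p_car = 0.5  # belief that a car is hidden behind the building

# Hypothetical costs: being hit is catastrophic; slowing down is cheap.
cost = {
    ("walk_on", "car"): 1000.0,   # walking on while a car is coming
    ("walk_on", "no_car"): 0.0,
    ("slow_down", "car"): 1.0,    # a minor delay, but safe
    ("slow_down", "no_car"): 1.0,
}

def expected_cost(action):
    # Average the cost of an action over our belief about the world.
    return p_car * cost[(action, "car")] + (1 - p_car) * cost[(action, "no_car")]

decision = min(["walk_on", "slow_down"], key=expected_cost)
print(decision)  # "slow_down": expected cost 1.0 beats walking on (500.0)
```

The point is not the particular numbers but the structure: a system that represents its uncertainty explicitly can weigh a small, certain inconvenience against a large, unlikely catastrophe, which is exactly the judgment we make without thinking.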

Third, current models are neither interpretable nor explainable. When an algorithm makes a decision from an X-ray image, it doesn’t really tell us why it made that decision. And if the decision turns out to be a mistake, we won’t get an explanation for why the system made it. With humans, we can probe the reasoning at every step, which reassures us when faced with high-stakes decisions such as whether or not to undergo surgery.

Following the current trend of bigger models and bigger data, the next step would be to collect more data and make our models bigger. Would this solve the above problems, or would we be climbing a tree higher to get to the moon? Climbing a tree higher is definitely useful—we can reach more fruits—but we will definitely not get to the moon!

We know we can be intelligent

Is there magic behind our intelligence? Or is it merely the result of neurons following physical laws? If you think it’s the former, we are likely out of luck but if you think it’s the latter, there is hope.

If we humans can be intelligent and do all these wonderful things, relying not on magic but on physical processes in our physical brains, then a machine can too. We only need to arrange things correctly. If we can be intelligent, so can a machine!

Build a baby and let it grow

If we take the previous argument seriously, one thing we could do is figure out what adults already know and program a machine to have the same knowledge. This is difficult because adults differ so much from one another. Adults have such a diverse set of knowledge that encoding all of it would be a nightmare.

It could be easier to build a starting state of a human mind and the learning processes that would get it into the state of an adult human mind. In other words, we can build a baby and let it grow.

To build a human-like AI, we can either build the adult mind directly or build the baby’s starting state together with its learning process, which is probably easier to build.
