Tuan Anh Le

Neurosymbolic generative models

17 July 2020

What are neurosymbolic generative models and why should we care? To answer this, let’s start with the shortcomings of current neural network approaches.

Given a lot of data, neural networks can be trained to be reasonable classifiers. For example, given a typical image of a cow, the model will output the label “cow”. These classifiers are reasonable most of the time, but also very brittle\(^1\). They fail to classify an image of a cow on the beach because they mistakenly learn to associate the “cow” label only with images that have grass backgrounds. Another way in which these classifiers are brittle is through adversarial examples. Modifying an image of a cow so slightly that the change is imperceptible to humans can fool the classifier into thinking, with high confidence, that it is a cat, even if the cow is standing in a grass field. This is far from harmless when we consider the possibility of putting adversarial stickers on stop signs to fool self-driving cars!

Compare this to how humans learn. We often need only a few examples of a new concept to understand it. If you had never seen a segway before and were shown a few examples, you would instantly get what a segway is, since you have seen motorbikes, cars, and bicycles before. Or, if you see a few examples of a new character from a foreign alphabet, you can reproduce it and tell it apart from other characters\(^2\).

In a neurosymbolic generative model, you specify the causal structure of a generative model of data and learn the rest using neural networks. For example, in the case of recognizing foreign characters, we can model the process of writing such a character. To form a character, you place one stroke after another, drawing from an abstract bank of stroke types like loops, straight lines, and so on. The exact shapes of the loops and straight lines are hard to model perfectly, so we can leave them to a neural network. Similarly, the model of the stroke sequence (which stroke follows which) can also be left to a neural network. Given such a model of writing, when we see a character, we can infer the sequence of strokes that could have been used to generate it; using this inferred sequence, we can easily reproduce the character or recognize another instance of it.
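To make this concrete, here is a minimal sketch in PyTorch of what such a model could look like. Everything here (the stroke vocabulary, the `StrokeSequenceModel` and `Renderer` architectures, the 28x28 canvas) is a hypothetical illustration of the idea, not the model from any of the papers cited below: a neural sequence model proposes symbolic stroke types, and a separate neural renderer fills in their exact shapes.

```python
import torch
import torch.nn as nn

# Hypothetical symbolic vocabulary of stroke types.
STROKE_TYPES = ["loop", "straight", "arc", "end"]
END = STROKE_TYPES.index("end")


class StrokeSequenceModel(nn.Module):
    """Neural model of which stroke type follows which: p(z_t | z_{<t})."""

    def __init__(self, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(len(STROKE_TYPES), hidden)
        self.rnn = nn.GRUCell(hidden, hidden)
        self.head = nn.Linear(hidden, len(STROKE_TYPES))

    def sample(self, max_len=8):
        h = torch.zeros(1, self.rnn.hidden_size)
        z, strokes = torch.zeros(1, dtype=torch.long), []
        for _ in range(max_len):
            h = self.rnn(self.embed(z), h)
            z = torch.distributions.Categorical(logits=self.head(h)).sample()
            if z.item() == END:
                break
            strokes.append(z.item())
        return strokes


class Renderer(nn.Module):
    """Neural model of the exact shape of each stroke type: renders a
    symbolic stroke sequence onto a 28x28 canvas (crudely, by summing
    per-stroke canvases; a real model would also place each stroke)."""

    def __init__(self, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(len(STROKE_TYPES), hidden)
        self.mlp = nn.Sequential(
            nn.Linear(hidden, 256), nn.ReLU(), nn.Linear(256, 28 * 28)
        )

    def forward(self, strokes):
        shapes = self.mlp(self.embed(torch.tensor(strokes))).sigmoid()
        return shapes.view(-1, 28, 28).sum(dim=0).clamp(max=1.0)


# Generating a character: symbols first, then neural rendering.
prior, renderer = StrokeSequenceModel(), Renderer()
strokes = prior.sample() or [0]  # fall back to one stroke if "end" comes first
image = renderer(strokes)
```

The key design choice is the interface between the two modules: a discrete sequence of stroke symbols. The prior never sees pixels and the renderer never sees the sequence dynamics, which is exactly what makes the pieces swappable.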

There are many components in this model, all of which play an important role in closing the gap between brittle neural networks that require a lot of training data and the generalizable, robust knowledge that humans learn from few examples. First, since the model is generative, it is robust to examples that are not in the training data, as long as the generative model can generate such examples\(^3\). A character can be recognized as long as it can be generated as a sequence of strokes, however unlikely and regardless of whether it was in the training data. Second, since the model is causal, it is modular and hence easier to learn\(^4\). We can learn one model for producing the stroke sequence and another for rendering it to an image, and the two are independent. To adapt to a new domain, we might only need to change the stroke-sequence model while retaining the rendering model. In contrast, a model that recognizes characters directly from an image must be retrained from scratch to work on a different domain. Third, since there are explicit symbols representing concept parts such as loops and straight lines, it is easy to recombine these symbols to form, and generalize to, unlikely concepts such as letters with excessively many adjacent loops\(^5\). Another instance of such generalization is our ability to picture completely unlikely scenes just by stringing words together in new ways. For instance, you can easily imagine “a green room full of cows watching the World Cup on a sand-colored couch while slurping Belgian beer”, and, what's more, you would be able to make sense of such a bizarre scene if you actually saw it. Lastly, neural networks are used to model the parts that are hard to model explicitly, like the exact shape of a loop in a character.
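Continuing the hypothetical sketch above, the second and third points translate into code directly: swap one module while reusing the other, and recombine symbols into sequences the training data never contained.

```python
# Modularity: adapt to a new alphabet by retraining only the
# stroke-sequence prior; the learned renderer is reused unchanged.
new_alphabet_prior = StrokeSequenceModel()
new_character = renderer(new_alphabet_prior.sample() or [0])

# Symbolic recombination: an unlikely "letter" with excessively many
# adjacent loops, composed directly from the stroke symbols.
many_loops = [STROKE_TYPES.index("loop")] * 6
weird_letter = renderer(many_loops)
```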

Lake et al. designed a causal, generative, and symbolic model of handwritten characters. Feinman and Lake added neural components and learned them from supervised stroke data. In our paper, we design a similar model and propose an algorithm that learns it, together with a neural recognition model, in a completely unsupervised way, directly from images. Can we apply this approach to cows, segways, and weird green rooms?
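For a rough idea of what the recognition side could look like, continuing the same hypothetical sketch (the architectures and training procedure in the actual papers differ), a recognition network maps an image to a distribution over stroke sequences, amortizing the inference described earlier:

```python
class RecognitionModel(nn.Module):
    """Hypothetical recognition network q(strokes | image): maps an
    image to per-step logits over stroke types."""

    def __init__(self, hidden=64, max_len=8):
        super().__init__()
        self.max_len = max_len
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, hidden),
            nn.ReLU(),
            nn.Linear(hidden, max_len * len(STROKE_TYPES)),
        )

    def forward(self, image):
        logits = self.encoder(image.view(1, 28, 28))
        return torch.distributions.Categorical(
            logits=logits.view(self.max_len, len(STROKE_TYPES))
        )


# Infer a stroke sequence for an observed character and reproduce it.
q = RecognitionModel()
inferred = [z for z in q(image).sample().tolist() if z != END] or [0]
reproduction = renderer(inferred)
```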


\(^1\) Geirhos et al. provide a good summary of the different ways in which neural networks can be brittle, which they call “shortcut learning”.

\(^2\) Examples are from Lake et al.

\(^3\) This is a textbook example of the advantage of generative over discriminative approaches. See, for example, section 1.5 of Bishop’s book.

\(^4\) This is known as the principle of “independent causal mechanisms”. See, for example, section 5 of Bernhard Schölkopf’s paper.

\(^5\) Systematicity of thought is one of the main arguments for why we have a “language of thought”, which, as far as I understand, says that our thoughts are represented using discrete symbols.
