Combining machine learning (ML) and computational logic (CL) is hard, mostly because of the inherently different ways in which they represent knowledge. While ML relies on fixed-size numeric representations based on vectors, matrices, or tensors of real numbers, CL relies on logic terms and clauses, which are unbounded in size and structure. Graph neural networks (GNN) are a recent addition to the ML toolbox, introduced to deal with graph-structured data sub-symbolically. As such, GNN pave the way towards the application of ML to logic clauses and knowledge bases. However, logic knowledge can be encoded into graphs in several ways, and the wisest choice heavily depends on the particular task at hand. Accordingly, in this paper we provide the following contributions: (I) we identify a number of problems from the field of CL that may benefit from corresponding graph-related problems where GNN have proved effective; (II) we exemplify the application of GNN to logic theories via an end-to-end toy example, to demonstrate the many intricacies hidden behind such a practice; (III) we discuss possible future directions for the application of GNN to CL in general, pointing out opportunities and open issues.
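To make the encoding issue concrete, the following minimal sketch (hypothetical, not taken from the paper) shows one of the several possible graph encodings of a logic term: the tree encoding, with one node per functor, variable, or constant, and one directed edge from each functor to each of its arguments. The term `parent(X, f(a))` is represented as a nested tuple; other encodings (e.g. sharing repeated variables across nodes, or adding argument-position labels to edges) would yield different graphs for the same term, which is precisely why the choice is task-dependent.

```python
# Hypothetical sketch: tree encoding of a logic term as a directed graph.
# The term parent(X, f(a)) is written as a nested tuple (functor, *args);
# bare strings stand for variables or constants.
term = ("parent", "X", ("f", "a"))

def term_to_graph(term, nodes=None, edges=None, parent=None):
    """Return (nodes, edges): one node per symbol, one edge from each
    functor node to each of its argument nodes (a tree-shaped graph)."""
    if nodes is None:
        nodes, edges = [], []
    # A tuple is a compound term whose first element is the functor name.
    label = term[0] if isinstance(term, tuple) else term
    idx = len(nodes)          # node identifiers are insertion indices
    nodes.append(label)
    if parent is not None:
        edges.append((parent, idx))
    if isinstance(term, tuple):
        for arg in term[1:]:  # recurse on the functor's arguments
            term_to_graph(arg, nodes, edges, idx)
    return nodes, edges

nodes, edges = term_to_graph(term)
print(nodes)  # ['parent', 'X', 'f', 'a']
print(edges)  # [(0, 1), (0, 2), (2, 3)]
```

The resulting node list and edge list are exactly the inputs a GNN message-passing layer consumes, after node labels are mapped to numeric feature vectors.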