The more intelligent systems based on sub-symbolic techniques pervade our everyday lives, the less humans can understand them. This is why symbolic approaches are receiving more and more attention in the general effort to make AI interpretable, explainable, and trustworthy. Understanding the current state of the art of AI techniques integrating symbolic and sub-symbolic approaches is therefore of paramount importance nowadays, in particular from the XAI perspective. In this talk we first provide an overview of the main symbolic/sub-symbolic integration techniques, focusing in particular on those targeting explainable AI systems. Then we extend the notion of “explainability by design” to the realm of multi-agent systems, where XAI techniques can play a key role in the engineering of intelligent systems.