abstract
An interesting topic in Artificial Intelligence is the ability of a system to learn the underlying patterns in data in order to achieve some form of human-like reasoning. Indeed, humans are continually acquiring, representing, and reasoning with new facts about the world. To make sense of the large quantity of information with which we are presented, we must compress, structure, and generalize from what we experience. This allows us to quickly understand new concepts and make useful predictions about them. Such an ability is especially valuable when data collection is hindered by:
- Limited data: the available data are limited in terms of feature coverage, since these systems typically run in operationally optimized settings, and collecting data outside this narrow range is usually expensive or even unsafe.
- Expensive data: in some settings, such as manufacturing facilities, data collection may be disruptive or require destructive measurements.
- Poor-quality data: data collected from physical infrastructure systems are often of poor quality (e.g., missing, corrupted, or noisy), since such systems typically contain old and legacy components.
Injecting prior knowledge into the learning process is a fundamental step toward overcoming these limitations. First, it spares the training process from having to induce this knowledge from the training set, thereby reducing the amount of training data required. Second, prior knowledge can be used to express the desired behavior of the learner on any input, providing stronger behavioral guarantees in adversarial or uncontrolled environments. Such knowledge is usually represented as rules that enable human-like reasoning and inference.
outcomes