Graph Neural Networks for Natural Language Processing: A Systematic Literature Review

Abstract

Deep learning has recently achieved great success in numerous fields of artificial intelligence, particularly in natural language processing (NLP). Traditionally, text inputs are represented as token sequences, using models such as bag of words, which describes a document by the occurrence counts of its words while disregarding grammar, word order and position, word structure, and semantics. Such representations are often combined with a scoring metric like TF-IDF, a numerical statistic intended to reflect how important a word is to a document in a collection or corpus: the score increases proportionally to the number of times the word appears in the document, but is offset by the number of documents that contain it. As a result, words that are common to every document rank low even if they appear many times, since they carry little information about any document in particular.
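The TF-IDF weighting described above can be sketched as follows; this is a minimal illustration of the standard term-frequency times inverse-document-frequency formula, with a toy corpus and function names chosen for the example rather than taken from any particular library.

```python
import math

def tf_idf(term, doc, corpus):
    """Score `term` in `doc` (a list of tokens) against `corpus` (a list of docs)."""
    # Term frequency: raw count of the term in this document.
    tf = doc.count(term)
    # Inverse document frequency: penalise terms that appear in many documents.
    docs_with_term = sum(1 for d in corpus if term in d)
    idf = math.log(len(corpus) / docs_with_term)
    return tf * idf

# Toy corpus of three tokenised "documents".
corpus = [
    ["the", "cat", "sat"],
    ["the", "dog", "ran"],
    ["the", "cat", "slept"],
]

# "the" occurs in every document, so its IDF (and thus its score) is zero,
# while "cat" is rarer and scores higher in documents that contain it.
```

Note how the scoring matches the abstract's description: a word common to every document ranks low regardless of how often it appears.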
However, a wide variety of NLP problems can be expressed with a graph structure. Indeed, popular deep learning techniques such as Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs) have inspired new graph-based deep learning techniques, namely Graph Convolutional Networks (GCNs) and Graph Recurrent Neural Networks (GRNNs). These new techniques have been applied to a large variety of graph-related problems, including NLP problems, since the structural information of a sentence in a text sequence can be exploited to augment the original sequence data with task-specific knowledge. Similarly, the semantic information in sequence data can be leveraged to enhance the original sequence data.
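As an illustration of how a GCN exploits graph structure, the sketch below implements the widely used propagation rule of Kipf and Welling, H' = ReLU(D^(-1/2)(A+I)D^(-1/2) H W), on a tiny graph that could stand in for the dependency structure of a three-word sentence. The adjacency matrix and feature dimensions are invented for the example.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: aggregate neighbour features, then transform."""
    # Add self-loops so each node keeps its own features during aggregation.
    A_hat = A + np.eye(A.shape[0])
    # Symmetric normalisation D^{-1/2} (A + I) D^{-1/2} keeps activations stable.
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    A_norm = d_inv_sqrt @ A_hat @ d_inv_sqrt
    # Propagate neighbour features, apply the weight matrix, then ReLU.
    return np.maximum(0.0, A_norm @ H @ W)

# Toy graph: three nodes (e.g. words in a sentence) in a chain 0 - 1 - 2.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)

rng = np.random.default_rng(0)
H = rng.random((3, 4))   # 4-dimensional input features per node
W = rng.random((4, 2))   # learned weights mapping to 2 output features

H_next = gcn_layer(A, H, W)  # shape (3, 2): new features per node
```

Unlike a plain sequence model, each node's new representation mixes in the features of its graph neighbours, which is how the sentence's structural information enters the computation.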
The objective of this systematic literature review (SLR) is to investigate which new techniques have been developed in the context of knowledge injection, that is, the act of combining the purely data-driven learning of neural networks with the infusion of knowledge from external sources, focusing on NLP applications and examining their pros and cons compared to approaches considered traditional.

Products