Q224 : Abstractive Summarization Using Deep Learning
Thesis > Central Library of Shahrood University > Computer Engineering > MSc > 2022
Authors:
[Author], Fatemeh Jafarinejad [Supervisor], Mohsen Rezvani [Supervisor]
Abstract: Summarization, a subfield of natural language processing, aims to reduce the size of a document while preserving its meaning. This task is especially important in today's information age: given the large volume of text documents, finding the desired information and understanding the contents is difficult and time-consuming. Text summarization has therefore attracted researchers in applications such as medical research, biology, and news. Automatic text summarization is broadly divided into single-document and multi-document settings, and two approaches are used: extractive summarization and abstractive summarization. In extractive summarization, the important parts of the text are selected and arranged in a suitable order to form the summary. In abstractive summarization, by contrast, the main meaning of the text is expressed without necessarily reusing the sentences of the original text. State-of-the-art approaches to abstractive summarization are data-driven and rely on the availability of large collections of articles paired with abstracts. In this thesis, we investigate the effectiveness of deep learning methods for abstractive text summarization, compressing the original document while preserving its key concepts. Since this task requires large amounts of data and powerful algorithms, we used pretrained models (transfer learning) to produce high-quality summaries for both interested specialists and the general public. The architecture of this work uses the PEGASUS, T5, and BART models, each trained on more than 750 GB of cleaned text collected from web pages. We fine-tuned these models for abstractive text summarization, and the final architecture was obtained by ensemble learning over the three models and tuning the parameters of each. On the test data, the ensemble combining the BART, T5, and PEGASUS models reached a ROUGE-1 score of 43.3, and a second ensemble built on the PEGASUS model reached 44.5. Compared with the ROUGE-1 score of 41.2 reported for the PEGASUS model alone, these results indicate the effectiveness of ensemble learning methods in this area of natural language processing.
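The abstract does not specify how the three fine-tuned models are combined, so the following is only a minimal sketch of the ensemble idea, assuming the Hugging Face transformers and rouge-score packages. The checkpoint names and the consensus-based selection rule (returning the candidate summary that agrees most with the others) are illustrative assumptions, not the thesis's exact configuration; the ROUGE-1 numbers quoted above correspond to the F1 measure multiplied by 100.

```python
# Sketch: ensemble of PEGASUS, T5, and BART summarizers with ROUGE-1 scoring.
# Checkpoint names and the consensus rule are assumptions for illustration.
from transformers import pipeline
from rouge_score import rouge_scorer

# Publicly available checkpoints of the three model families named in the
# abstract (illustrative choices, not the thesis's fine-tuned weights).
MODEL_NAMES = [
    "google/pegasus-cnn_dailymail",
    "t5-base",
    "facebook/bart-large-cnn",
]

summarizers = [pipeline("summarization", model=name) for name in MODEL_NAMES]
scorer = rouge_scorer.RougeScorer(["rouge1"], use_stemmer=True)


def ensemble_summarize(document: str) -> str:
    """Summarize with all three models, then return the candidate that
    agrees most with the others (highest mean ROUGE-1 F1 vs. the rest)."""
    candidates = [
        s(document, max_length=128, min_length=16, truncation=True)[0]["summary_text"]
        for s in summarizers
    ]

    def consensus(cand: str) -> float:
        others = [c for c in candidates if c is not cand]
        return sum(scorer.score(o, cand)["rouge1"].fmeasure for o in others) / len(others)

    return max(candidates, key=consensus)


def rouge1_f1(reference: str, prediction: str) -> float:
    """ROUGE-1 F1 between a reference abstract and a generated summary,
    i.e. the evaluation metric reported in the abstract (x100)."""
    return scorer.score(reference, prediction)["rouge1"].fmeasure
```

A simple usage would be `score = 100 * rouge1_f1(gold_abstract, ensemble_summarize(article))`, averaged over the test set; other ensembling strategies (e.g. weighting models by validation ROUGE) fit the same skeleton.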
Keywords:
#Natural Language Processing #Abstractive Summarization #Transfer Learning #Deep Learning
Keeping place: Central Library of Shahrood University