Q288 : Vulnerability Analysis using BERT’s Language Model
Thesis > Central Library of Shahrood University > Computer Engineering > MSc > 2024
Authors:
Abstract:
In the training process with BERT models, the findings demonstrate high performance and a strong ability to understand and process natural language. BERT, owing in particular to its bidirectional attention mechanism, can identify complex semantic relationships between words in a text and exploit them across a range of natural language processing tasks. In experiments with different BERT variants, the bert-base-uncased model outperformed the bert-base-cased model, which may be attributable to its insensitivity to letter case during preprocessing.
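To illustrate the case-handling difference behind this comparison, the following is a minimal sketch, assuming the Hugging Face transformers library; the sample sentence and the checkpoint names are illustrative and do not reproduce the thesis's actual pipeline.

```python
# Compare how the two BERT checkpoints tokenize the same vulnerability description.
# "bert-base-uncased" lower-cases input before WordPiece tokenization, while
# "bert-base-cased" preserves the original letter case.
from transformers import AutoTokenizer

# Hypothetical input text, chosen only to show mixed-case security vocabulary.
text = "SQL Injection in the Login form allows remote attackers to bypass authentication."

for checkpoint in ("bert-base-uncased", "bert-base-cased"):
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    tokens = tokenizer.tokenize(text)
    print(f"{checkpoint}: {len(tokens)} tokens -> {tokens[:8]} ...")
```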
One of the most important findings is that BERT models require a large volume of training data to learn the semantic features of text effectively. In addition, training time is reduced significantly when powerful hardware such as GPUs and platforms such as Google Colab are used, which accelerates the learning process. These findings suggest that BERT is a powerful tool for improving performance on natural language processing tasks and can deliver high accuracy in analyzing and modeling text data.
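As a sketch of the GPU-accelerated fine-tuning setup described above, the following minimal example, assuming PyTorch and the Hugging Face transformers library, runs one training step of a BERT classifier; the two-class label set and the toy batch are hypothetical stand-ins, not the thesis's corpus.

```python
# Minimal fine-tuning sketch: one optimizer step of a BERT sequence classifier,
# placed on a GPU when one is available (e.g. in a Google Colab runtime).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # hypothetical labels: vulnerable vs. benign
).to(device)

# Illustrative mini-batch: two labeled descriptions (not real thesis data).
texts = ["buffer overflow in strcpy call", "validated input with bounds checking"]
labels = torch.tensor([1, 0]).to(device)

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt").to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
outputs = model(**batch, labels=labels)  # loss is computed internally from the labels
outputs.loss.backward()
optimizer.step()
print(f"training loss on the toy batch: {outputs.loss.item():.4f}")
```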
Keywords: security vulnerabilities, large language models, BERT, cyberattacks, natural language processing
Keeping place: Central Library of Shahrood University