Q293: Multi-step Prompt Optimization Using Language Models
Thesis > Central Library of Shahrood University > Computer Engineering > MSc > 2025
Authors:
Abstract:
With the rapid growth of large language models (LLMs), these systems have become effective tools in a wide range of domains, including content generation, machine translation, question answering, text analysis, and decision support. Their performance, however, depends heavily on the quality and structure of the input prompts. Although LLMs demonstrate remarkable capabilities in natural language understanding, users often fail to convey their intent fully, resulting in outputs that lack accuracy, coherence, and relevance.
Prompt engineering has emerged to address this issue, aiming to optimize the formulation of prompts for LLMs. Previous studies in this area have mainly focused on manual design or iterative refinement of prompts through trial and error. While such approaches can improve model performance, they are often costly, time-consuming, and dependent on human expertise, with limited scalability. Some recent methods employ gradient-based optimization, reinforcement learning, or evolutionary algorithms; however, challenges remain in preserving meaning, avoiding contradictions, and ensuring consistency of outputs with the original intent.
This research presents a multi-step framework for prompt optimization that is not limited to a specific application and can be adapted to various natural language processing (NLP) tasks. The framework leverages the language model itself for evaluation, feedback generation, and rewriting. Two algorithms are proposed: a three-step version based on discourse analysis and a five-step version incorporating semantic contradiction control. The framework includes a dedicated stage for detecting and controlling concept drift, preventing unintended changes to the meaning or inferences of the original text during rewriting, thereby ensuring that the final output, while linguistically improved, remains semantically consistent with the original version.
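To make the pipeline concrete, the following is a minimal Python sketch of the evaluate-feedback-rewrite loop with a drift check, as described above. All names here (the call_llm placeholder, the prompt templates, and max_steps) are hypothetical illustrations under stated assumptions, not the thesis's actual implementation.

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder for any chat-completion client;
    # replace with a real API call in practice.
    raise NotImplementedError

def optimize(text: str, max_steps: int = 5) -> str:
    current = text
    for _ in range(max_steps):
        # The model evaluates the current draft ...
        critique = call_llm("Evaluate the clarity, coherence, and accuracy of:\n" + current)
        # ... turns the evaluation into actionable feedback ...
        feedback = call_llm("Turn this critique into a list of concrete edits:\n" + critique)
        # ... and rewrites the draft according to that feedback.
        revised = call_llm("Apply these edits:\n" + feedback + "\n\nText:\n" + current)
        # Concept drift control: reject rewrites whose meaning or
        # inferences diverge from the original text.
        verdict = call_llm("Do these two texts make the same claims? Answer YES or NO.\n"
                           "A:\n" + text + "\n\nB:\n" + revised)
        if verdict.strip().upper().startswith("YES"):
            current = revised
        else:
            break  # drift detected: keep the last semantically consistent version
    return current

In this sketch the YES/NO consistency check only approximates the dedicated drift-control stage; the thesis's three-step and five-step algorithms define their stages in more detail.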
For empirical evaluation, a case study was conducted in the domain of scientific writing enhancement. In this study, the abstract, introduction, and literature review sections of 50 scientific articles were rewritten and optimized using the proposed algorithms. Results show that the generated versions significantly outperform the originals in linguistic quality, coherence, clarity, and content accuracy. These improvements were confirmed by three human evaluators and three independent LLM-based assessments, demonstrating the effectiveness of the multi-step approach in improving scientific writing and highlighting its potential for adaptation to other NLP domains.
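As an illustration of how an LLM-based assessment of this kind might be run, here is a hedged sketch of a pairwise judge, reusing the hypothetical call_llm placeholder from the sketch above; the thesis's actual rubric, judge models, and scoring procedure are not reproduced here.

def judge_pair(original: str, rewritten: str, n_judges: int = 3) -> float:
    # Returns the fraction of independent LLM judgments that prefer
    # the rewritten text over the original.
    wins = 0
    for _ in range(n_judges):
        answer = call_llm("Which text is better in linguistic quality, coherence, "
                          "clarity, and content accuracy? Answer A or B.\n"
                          "A:\n" + original + "\n\nB:\n" + rewritten)
        if answer.strip().upper().startswith("B"):
            wins += 1
    return wins / n_judges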
Keywords:
#Large Language Models #Prompt Engineering #Multi-step Optimization #Concept Drift Detection #Scientific Rewriting Enhancement #Semantic Consistency #Natural Language Processing
Keeping place: Central Library of Shahrood University