
Consistency of GPT Models in Classifying Natural Language Requirements

About this project

Project information

Project status

In progress, 2024–2025

Contact

Fredrik Karlsson

Requirements engineering is a cornerstone of systems development. Typically, requirements are documented in natural language; these natural language requirements (NLRs) serve as a bridge between stakeholders’ expectations and the envisioned solution. However, as the volume of requirements grows, so does the likelihood of overlooking critical aspects of these expectations. To address this challenge, classifying natural language requirements is invaluable, as it helps to organise and prioritise them. For example, natural language requirements can be categorised into functional and non-functional requirements, and these classes can be divided further.

In this project we explore how large language models can be used to support the task of classifying natural language requirements. In particular, we are interested in the use of Generative Pretrained Transformer (GPT) models, such as OpenAI’s GPT series. GPT models seem less favoured for this type of categorisation task, partly due to concerns about their inconsistency in producing reliable results. The aim of this project is to investigate the consistency of GPT models when classifying NLRs using zero-shot and few-shot learning.
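To make the setup concrete, the sketch below illustrates one way such a consistency check could look; it is not the project's code. The model name (gpt-4o-mini), the label set, the prompt wording, and the classify/consistency helpers are illustrative assumptions, shown here for a zero-shot prompt with the OpenAI Python SDK.

```python
# Minimal illustrative sketch (not the project's actual setup): zero-shot
# classification of a natural language requirement with a GPT model, repeated
# several times to probe consistency. Model, prompt, and labels are assumed.
from collections import Counter

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

LABELS = ["functional", "non-functional"]


def classify(requirement: str) -> str:
    """Ask the model for a single label, zero-shot (no examples in the prompt)."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; the project studies the GPT series
        temperature=0,        # even at temperature 0, outputs can vary across runs
        messages=[
            {
                "role": "system",
                "content": f"Classify the software requirement as one of: "
                           f"{', '.join(LABELS)}. Answer with the label only.",
            },
            {"role": "user", "content": requirement},
        ],
    )
    return response.choices[0].message.content.strip().lower()


def consistency(requirement: str, runs: int = 10) -> float:
    """Fraction of runs agreeing with the majority label (1.0 = fully consistent)."""
    votes = Counter(classify(requirement) for _ in range(runs))
    return votes.most_common(1)[0][1] / runs


if __name__ == "__main__":
    req = "The system shall respond to user queries within two seconds."
    print(f"consistency over 10 runs: {consistency(req):.2f}")
```

A few-shot variant would differ only in the prompt: a handful of labelled example requirements would be inserted as message pairs before the requirement to be classified, which is one of the conditions the project compares against zero-shot prompting.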

Research funding bodies

  • Örebro University