TY - JOUR
T1 - Generating domain models from natural language text using NLP
T2 - a benchmark dataset and experimental comparison of tools
AU - Bozyigit, Fatma
AU - Bardakci, Tolgahan
AU - Khalilipour, Alireza
AU - Challenger, Moharram
AU - Ramackers, Guus
AU - Babur, Önder
AU - Chaudron, Michel R.V.
PY - 2024/5/8
Y1 - 2024/5/8
N2 - A software requirements specification describes users’ needs and expectations of a target system. Requirements documents are typically written as unstructured natural language text. Such texts form the basis for various subsequent activities in software development, such as software analysis and design. As part of software analysis, domain models are created that describe the key concepts and the relations between them. Since the analysis process is performed manually by business analysts, it is time-consuming and error-prone. Recently, researchers have worked toward automating the synthesis of domain models from textual software requirements. Current studies on this topic are limited in the volume and heterogeneity of their experimental datasets. To remedy this, we provide a curated dataset of software requirements to be used as a benchmark by algorithms that transform textual requirements documents into domain models. We present a detailed evaluation of two text-to-model approaches: one based on a large language model (ChatGPT) and one based on grammatical rules (txt2Model). Our evaluation reveals that both tools yield promising results, with relatively high F-scores for modeling classes, attributes, methods, and relationships, and with txt2Model performing better than ChatGPT on average. Both tools show lower performance and higher variance for the relation types. We believe our dataset and experimental evaluation pave the way to advancing the field of automated model generation from requirements.
AB - A software requirements specification describes users’ needs and expectations of a target system. Requirements documents are typically written as unstructured natural language text. Such texts form the basis for various subsequent activities in software development, such as software analysis and design. As part of software analysis, domain models are created that describe the key concepts and the relations between them. Since the analysis process is performed manually by business analysts, it is time-consuming and error-prone. Recently, researchers have worked toward automating the synthesis of domain models from textual software requirements. Current studies on this topic are limited in the volume and heterogeneity of their experimental datasets. To remedy this, we provide a curated dataset of software requirements to be used as a benchmark by algorithms that transform textual requirements documents into domain models. We present a detailed evaluation of two text-to-model approaches: one based on a large language model (ChatGPT) and one based on grammatical rules (txt2Model). Our evaluation reveals that both tools yield promising results, with relatively high F-scores for modeling classes, attributes, methods, and relationships, and with txt2Model performing better than ChatGPT on average. Both tools show lower performance and higher variance for the relation types. We believe our dataset and experimental evaluation pave the way to advancing the field of automated model generation from requirements.
KW - Benchmark dataset
KW - Software functional requirements
KW - Software models
KW - Text-to-model transformation
UR - http://www.scopus.com/inward/record.url?scp=85192837407&partnerID=8YFLogxK
U2 - 10.1007/s10270-024-01176-y
DO - 10.1007/s10270-024-01176-y
M3 - Article
AN - SCOPUS:85192837407
SN - 1619-1366
VL - XX
JO - Software and Systems Modeling
JF - Software and Systems Modeling
IS - X
ER -