Why are my Pizzas late?

Dirk Fahland, Fabiana Fournier, Lior Limonad, Inna Skarbovsky, Ava J.E. Swevels

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer review

Abstract

We refer to explainability as a system's ability to provide sound and human-understandable insights concerning its outcomes. Explanations should accurately reflect causal relations in process executions [1]. This abstract suggests augmenting process discovery (PD) with causal process discovery (CD) to generate causal-process-execution narratives. These narratives serve as input for large language models (LLMs) to derive sound and human-interpretable explanations. A multi-layered knowledge graph is employed to facilitate diverse process views.

Background. Process discovery (PD) summarizes an event log L into a graph model M that represents activities and control-flow dependencies [2]. Most PD algorithms construct edges in M that indicate to which subsequent activities process control “flows to”. This relation is derived from traces by computing “temporally precedes” (<) and “directly precedes” (⋖) relations over activity names, and then discarding a < b iff a ⋖ b and b ⋖ a [3]. Advancements in Machine Learning (ML) have made ML models more complex, sacrificing explainability and resulting in “black box” models. This led to the emergence of external explanation frameworks, known as XAI, to enhance understandability [4]. XAI frameworks are predominantly applied post hoc, after the ML model's training [5]. Causal discovery [6] infers causal graphs from data by exploring relationships like A →c B, where changes in A entail changes in B. In this work, we used the Linear Non-Gaussian Acyclic Model (LiNGAM) [7] for CD, as in [1]. Inspired by [8], which highlights LLMs' ability to provide interpretable explanations, we aim to demonstrate that CD can enhance explanations of process execution outcomes when used as input for LLMs. LLMs are deep-learning models trained on text data, adept at few-shot and zero-shot learning using prompt-based techniques [9].

Approach. Our research aim is to combine PD, CD, and XAI to generate narratives for improved process outcome explanations using LLMs. As a proof of concept (POC), we show how CD helps to leverage LLMs for more sound explanations. We use a multi-layered knowledge graph stored in a Neo4j database as infrastructure. We model the data using labeled property graphs in which each node and each relationship (directed edge) is typed by a label. Fig. 1 shows the graph schema. Each Event node has a timestamp and is correlated to one case; the directly-follows relations describe the temporal order of all events correlated to the same case. These concepts allow modeling any event log in a graph [10].
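The “flows to” construction described in the Background can be illustrated with a small sketch. This is not the paper's implementation; it assumes a log given as plain Python lists of activity names per trace, and the function name flow_relation is hypothetical.

def flow_relation(log):
    """Derive the "flows to" relation from a log of traces.

    log: a list of traces, each trace a list of activity names.
    Sketch of the construction in the abstract: collect the
    directly-precedes pairs (a is immediately followed by b in some
    trace) and drop a -> b whenever the reverse pair b -> a also
    occurs, i.e. a and b are treated as concurrent rather than ordered.
    """
    directly = set()
    for trace in log:
        for a, b in zip(trace, trace[1:]):
            directly.add((a, b))
    return {(a, b) for (a, b) in directly if (b, a) not in directly}


# Toy pizza-process log (invented for illustration).
log = [["order", "prepare", "bake", "deliver"],
       ["order", "bake", "prepare", "deliver"]]
# prepare and bake directly precede each other, so no flow edge between them.
print(sorted(flow_relation(log)))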
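For the CD step, the abstract names LiNGAM [7]. The minimal sketch below uses DirectLiNGAM, one estimator from the open-source lingam Python package; the per-case features (queue_length, prep_time_min, delivery_delay) and their values are invented for illustration and are not taken from the paper.

import pandas as pd
import lingam

# One row per process case; hypothetical numeric features extracted from an event log.
df = pd.DataFrame({
    "queue_length":   [3, 7, 2, 9, 5, 1, 8, 4],
    "prep_time_min":  [10, 22, 8, 30, 15, 6, 26, 12],
    "delivery_delay": [2, 14, 1, 20, 7, 0, 17, 4],
})

model = lingam.DirectLiNGAM()   # linear non-Gaussian acyclic model
model.fit(df)

# causal_order_ lists column indices from causes to effects;
# adjacency_matrix_[i, j] is the estimated effect of column j on column i.
print(model.causal_order_)
print(model.adjacency_matrix_)

The estimated causal graph is the kind of artifact that would feed a causal-process-execution narrative passed to an LLM.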
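The labeled-property-graph infrastructure can likewise be sketched with the official neo4j Python driver. The connection settings, the node labels Case and Event, the relationship types CORR and DF, and the toy events below are assumptions for illustration only; the concrete schema is the one shown in Fig. 1.

from neo4j import GraphDatabase

# Toy events of a single case: (case id, activity, timestamp), invented for illustration.
events = [
    ("c1", "order",   "2023-08-19T10:00"),
    ("c1", "prepare", "2023-08-19T10:10"),
    ("c1", "bake",    "2023-08-19T10:25"),
    ("c1", "deliver", "2023-08-19T10:50"),
]

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
with driver.session() as session:
    prev = None
    for i, (case, activity, ts) in enumerate(events):
        # Event node with a timestamp, correlated (CORR) to its Case node.
        session.run(
            "MERGE (c:Case {id: $case}) "
            "CREATE (e:Event {id: $eid, activity: $act, timestamp: $ts}) "
            "CREATE (e)-[:CORR]->(c)",
            case=case, eid=i, act=activity, ts=ts,
        )
        if prev is not None:
            # Directly-follows (DF) edge between consecutive events of the same case.
            session.run(
                "MATCH (a:Event {id: $a}), (b:Event {id: $b}) CREATE (a)-[:DF]->(b)",
                a=prev, b=i,
            )
        prev = i
driver.close()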

Original language: English
Title of host publication: International Workshop on Process Management in the AI Era, PMAI 2023
Pages: 25-28
Number of pages: 4
Status: Published - 2023
Event: 2nd International Workshop on Process Management in the AI Era, PMAI 2023 - Macao, China
Duration: 19 Aug 2023 - 19 Aug 2023

Publication series

Name: CEUR Workshop Proceedings
Volume: 3569
ISSN (Print): 1613-0073

Workshop

Workshop: 2nd International Workshop on Process Management in the AI Era, PMAI 2023
Country/Territory: China
City: Macao
Period: 19/08/23 - 19/08/23
