TY - GEN
T1 - Why are my Pizzas late?
AU - Fahland, Dirk
AU - Fournier, Fabiana
AU - Limonad, Lior
AU - Skarbovsky, Inna
AU - Swevels, Ava J.E.
PY - 2023
Y1 - 2023
AB - We refer to explainability as a system's ability to provide sound and human-understandable insights concerning its outcomes. Explanations should accurately reflect causal relations in process executions [1]. This abstract proposes augmenting process discovery (PD) with causal process discovery (CD) to generate causal-process-execution narratives. These narratives serve as input for large language models (LLMs) to derive sound and human-interpretable explanations. A multi-layered knowledge graph is employed to facilitate diverse process views. Background. Process discovery (PD) summarizes an event log L into a graph model M that represents activities and control-flow dependencies [2]. Most PD algorithms construct edges in M that indicate to which subsequent activities process control “flows to”. This relation is derived from traces by computing “temporally precedes” (<) and “directly precedes” (⋖) relations over activity names, and then discarding a < b iff a ⋖ b and b ⋖ a [3]. Advancements in machine learning (ML) have made ML models more complex, sacrificing explainability and resulting in “black box” models. This led to the emergence of external explanation frameworks, known as XAI, to enhance understandability [4]. XAI frameworks are predominantly applied post hoc, after the ML model's training [5]. Causal discovery [6] infers causal graphs from data by exploring relationships like A →_c B, where changes in A entail changes in B. In this work, we used the Linear Non-Gaussian Acyclic Model (LiNGAM) [7] for CD, as in [1]. Inspired by [8], which highlights LLMs' ability to provide interpretable explanations, we aim to demonstrate that CD can enhance explanations of process execution outcomes when used as input for LLMs. LLMs are deep-learning models trained on text data, adept at few-shot and zero-shot learning using prompt-based techniques [9]. Approach. Our research aim is to combine PD, CD, and XAI to generate narratives for improved process-outcome explanations using LLMs. As a proof of concept (POC), we show how CD helps to leverage LLMs for sounder explanations. We use a multi-layered knowledge graph stored in a Neo4j database as infrastructure. We model the data using labeled property graphs, in which each node and each relationship (directed edge) is typed by a label. Fig. 1 shows the graph schema. Each Event node has a timestamp and is correlated to one case; the directly-follows relations describe the temporal order of all events correlated to the same case. These concepts allow modeling any event log in a graph [10].
UR - http://www.scopus.com/inward/record.url?scp=85180150865&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85180150865
T3 - CEUR Workshop Proceedings
SP - 25
EP - 28
BT - International Workshop on Process Management in the AI Era, PMAI 2023
T2 - 2nd International Workshop on Process Management in the AI Era, PMAI 2023
Y2 - 19 August 2023 through 19 August 2023
ER -